00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2449 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3710 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.148 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.149 The recommended git tool is: git 00:00:00.149 using credential 00000000-0000-0000-0000-000000000002 00:00:00.152 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.195 Fetching changes from the remote Git repository 00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.244 Using shallow fetch with depth 1 00:00:00.244 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.244 > git --version # timeout=10 00:00:00.276 > git --version # 'git version 2.39.2' 00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.299 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.299 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.397 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.408 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.420 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.420 > git config core.sparsecheckout # timeout=10 00:00:08.430 > git read-tree -mu HEAD # timeout=10 00:00:08.446 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.468 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.468 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.574 [Pipeline] Start of Pipeline 00:00:08.587 [Pipeline] library 00:00:08.588 Loading library shm_lib@master 00:00:08.589 Library shm_lib@master is cached. Copying from home. 00:00:08.600 [Pipeline] node 00:00:08.614 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.616 [Pipeline] { 00:00:08.623 [Pipeline] catchError 00:00:08.625 [Pipeline] { 00:00:08.634 [Pipeline] wrap 00:00:08.640 [Pipeline] { 00:00:08.646 [Pipeline] stage 00:00:08.648 [Pipeline] { (Prologue) 00:00:08.660 [Pipeline] echo 00:00:08.661 Node: VM-host-SM38 00:00:08.665 [Pipeline] cleanWs 00:00:08.674 [WS-CLEANUP] Deleting project workspace... 00:00:08.674 [WS-CLEANUP] Deferred wipeout is used... 
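A note on the checkout pattern above: the job pins the jbp helper repo with a shallow, depth-1 fetch and then checks out the fetched revision detached. A minimal standalone sketch of the same sequence (the credential, proxy, and timeout wiring from the real job is omitted; WORKDIR is a placeholder, not a value from the job config):

  # Sketch of the shallow-checkout sequence logged above; WORKDIR is a placeholder.
  REPO_URL=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  WORKDIR=jbp
  git init "$WORKDIR" && cd "$WORKDIR"
  git config remote.origin.url "$REPO_URL"
  # --depth=1 fetches only the branch tip; FETCH_HEAD records the fetched commit.
  git fetch --tags --force --progress --depth=1 -- "$REPO_URL" refs/heads/master
  # Detach onto the exact revision, mirroring the rev-parse/checkout pair above.
  git checkout -f "$(git rev-parse FETCH_HEAD^{commit})"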
00:00:08.682 [WS-CLEANUP] done 00:00:08.865 [Pipeline] setCustomBuildProperty 00:00:08.948 [Pipeline] httpRequest 00:00:09.661 [Pipeline] echo 00:00:09.663 Sorcerer 10.211.164.20 is alive 00:00:09.675 [Pipeline] retry 00:00:09.678 [Pipeline] { 00:00:09.696 [Pipeline] httpRequest 00:00:09.702 HttpMethod: GET 00:00:09.702 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.703 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.717 Response Code: HTTP/1.1 200 OK 00:00:09.717 Success: Status code 200 is in the accepted range: 200,404 00:00:09.718 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.860 [Pipeline] } 00:00:16.879 [Pipeline] // retry 00:00:16.889 [Pipeline] sh 00:00:17.174 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.193 [Pipeline] httpRequest 00:00:17.604 [Pipeline] echo 00:00:17.605 Sorcerer 10.211.164.20 is alive 00:00:17.617 [Pipeline] retry 00:00:17.619 [Pipeline] { 00:00:17.636 [Pipeline] httpRequest 00:00:17.642 HttpMethod: GET 00:00:17.643 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:17.643 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:17.664 Response Code: HTTP/1.1 200 OK 00:00:17.665 Success: Status code 200 is in the accepted range: 200,404 00:00:17.666 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:18.878 [Pipeline] } 00:01:18.896 [Pipeline] // retry 00:01:18.904 [Pipeline] sh 00:01:19.190 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:21.748 [Pipeline] sh 00:01:22.035 + git -C spdk log --oneline -n5 00:01:22.035 c13c99a5e test: Various fixes for Fedora40 00:01:22.035 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:22.035 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:22.035 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:22.035 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:22.056 [Pipeline] writeFile 00:01:22.071 [Pipeline] sh 00:01:22.361 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:22.374 [Pipeline] sh 00:01:22.652 + cat autorun-spdk.conf 00:01:22.652 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.652 SPDK_TEST_NVME=1 00:01:22.652 SPDK_TEST_FTL=1 00:01:22.652 SPDK_TEST_ISAL=1 00:01:22.652 SPDK_RUN_ASAN=1 00:01:22.652 SPDK_RUN_UBSAN=1 00:01:22.652 SPDK_TEST_XNVME=1 00:01:22.652 SPDK_TEST_NVME_FDP=1 00:01:22.652 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.660 RUN_NIGHTLY=1 00:01:22.662 [Pipeline] } 00:01:22.673 [Pipeline] // stage 00:01:22.686 [Pipeline] stage 00:01:22.688 [Pipeline] { (Run VM) 00:01:22.699 [Pipeline] sh 00:01:22.982 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:22.982 + echo 'Start stage prepare_nvme.sh' 00:01:22.982 Start stage prepare_nvme.sh 00:01:22.982 + [[ -n 9 ]] 00:01:22.982 + disk_prefix=ex9 00:01:22.982 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:22.982 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:22.982 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:22.982 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.982 ++ SPDK_TEST_NVME=1 00:01:22.982 ++ SPDK_TEST_FTL=1 00:01:22.982 ++ SPDK_TEST_ISAL=1 00:01:22.982 ++ 
SPDK_RUN_ASAN=1 00:01:22.982 ++ SPDK_RUN_UBSAN=1 00:01:22.982 ++ SPDK_TEST_XNVME=1 00:01:22.982 ++ SPDK_TEST_NVME_FDP=1 00:01:22.982 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.982 ++ RUN_NIGHTLY=1 00:01:22.982 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:22.982 + nvme_files=() 00:01:22.982 + declare -A nvme_files 00:01:22.982 + backend_dir=/var/lib/libvirt/images/backends 00:01:22.982 + nvme_files['nvme.img']=5G 00:01:22.982 + nvme_files['nvme-cmb.img']=5G 00:01:22.983 + nvme_files['nvme-multi0.img']=4G 00:01:22.983 + nvme_files['nvme-multi1.img']=4G 00:01:22.983 + nvme_files['nvme-multi2.img']=4G 00:01:22.983 + nvme_files['nvme-openstack.img']=8G 00:01:22.983 + nvme_files['nvme-zns.img']=5G 00:01:22.983 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:22.983 + (( SPDK_TEST_FTL == 1 )) 00:01:22.983 + nvme_files["nvme-ftl.img"]=6G 00:01:22.983 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:22.983 + nvme_files["nvme-fdp.img"]=1G 00:01:22.983 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:22.983 + for nvme in "${!nvme_files[@]}" 00:01:22.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:22.983 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.983 + for nvme in "${!nvme_files[@]}" 00:01:22.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G 00:01:22.983 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:22.983 + for nvme in "${!nvme_files[@]}" 00:01:22.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:22.983 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:22.983 + for nvme in "${!nvme_files[@]}" 00:01:22.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:22.983 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:22.983 + for nvme in "${!nvme_files[@]}" 00:01:22.983 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:23.550 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.550 + for nvme in "${!nvme_files[@]}" 00:01:23.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:23.550 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.550 + for nvme in "${!nvme_files[@]}" 00:01:23.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:23.550 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.550 + for nvme in "${!nvme_files[@]}" 00:01:23.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G 00:01:23.550 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:23.550 + for nvme in "${!nvme_files[@]}" 00:01:23.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:24.121 Formatting 
'/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.121 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:24.121 + echo 'End stage prepare_nvme.sh' 00:01:24.121 End stage prepare_nvme.sh 00:01:24.133 [Pipeline] sh 00:01:24.478 + DISTRO=fedora39 00:01:24.478 + CPUS=10 00:01:24.478 + RAM=12288 00:01:24.478 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:24.478 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:24.478 00:01:24.478 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:24.478 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:24.478 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:24.478 HELP=0 00:01:24.478 DRY_RUN=0 00:01:24.478 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img, 00:01:24.478 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:24.478 NVME_AUTO_CREATE=0 00:01:24.478 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,, 00:01:24.478 NVME_CMB=,,,, 00:01:24.478 NVME_PMR=,,,, 00:01:24.478 NVME_ZNS=,,,, 00:01:24.478 NVME_MS=true,,,, 00:01:24.478 NVME_FDP=,,,on, 00:01:24.478 SPDK_VAGRANT_DISTRO=fedora39 00:01:24.478 SPDK_VAGRANT_VMCPU=10 00:01:24.478 SPDK_VAGRANT_VMRAM=12288 00:01:24.478 SPDK_VAGRANT_PROVIDER=libvirt 00:01:24.478 SPDK_VAGRANT_HTTP_PROXY= 00:01:24.478 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:24.478 SPDK_OPENSTACK_NETWORK=0 00:01:24.478 VAGRANT_PACKAGE_BOX=0 00:01:24.478 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:24.478 FORCE_DISTRO=true 00:01:24.478 VAGRANT_BOX_VERSION= 00:01:24.478 EXTRA_VAGRANTFILES= 00:01:24.478 NIC_MODEL=e1000 00:01:24.478 00:01:24.478 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:24.478 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:27.021 Bringing machine 'default' up with 'libvirt' provider... 00:01:27.282 ==> default: Creating image (snapshot of base box volume). 00:01:27.544 ==> default: Creating domain with the following settings... 
00:01:27.544 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733666189_ab84f63103217948275a
00:01:27.544 ==> default: -- Domain type: kvm
00:01:27.544 ==> default: -- Cpus: 10
00:01:27.544 ==> default: -- Feature: acpi
00:01:27.544 ==> default: -- Feature: apic
00:01:27.544 ==> default: -- Feature: pae
00:01:27.544 ==> default: -- Memory: 12288M
00:01:27.544 ==> default: -- Memory Backing: hugepages:
00:01:27.544 ==> default: -- Management MAC:
00:01:27.544 ==> default: -- Loader:
00:01:27.544 ==> default: -- Nvram:
00:01:27.544 ==> default: -- Base box: spdk/fedora39
00:01:27.544 ==> default: -- Storage pool: default
00:01:27.544 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733666189_ab84f63103217948275a.img (20G)
00:01:27.544 ==> default: -- Volume Cache: default
00:01:27.544 ==> default: -- Kernel:
00:01:27.544 ==> default: -- Initrd:
00:01:27.544 ==> default: -- Graphics Type: vnc
00:01:27.544 ==> default: -- Graphics Port: -1
00:01:27.544 ==> default: -- Graphics IP: 127.0.0.1
00:01:27.544 ==> default: -- Graphics Password: Not defined
00:01:27.544 ==> default: -- Video Type: cirrus
00:01:27.544 ==> default: -- Video VRAM: 9216
00:01:27.544 ==> default: -- Sound Type:
00:01:27.544 ==> default: -- Keymap: en-us
00:01:27.544 ==> default: -- TPM Path:
00:01:27.544 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:27.544 ==> default: -- Command line args:
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:27.544 ==> default: -> value=-drive,
00:01:27.544 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:27.544 ==> default: -> value=-drive,
00:01:27.544 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:27.544 ==> default: -> value=-drive,
00:01:27.544 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.544 ==> default: -> value=-drive,
00:01:27.544 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:27.544 ==> default: -> value=-drive,
00:01:27.544 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:27.544 ==> default: -> value=-device,
00:01:27.544 ==> default: ->
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.544 ==> default: -> value=-device, 00:01:27.544 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:27.544 ==> default: -> value=-device, 00:01:27.544 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:27.544 ==> default: -> value=-drive, 00:01:27.544 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:27.544 ==> default: -> value=-device, 00:01:27.544 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:27.544 ==> default: Creating shared folders metadata... 00:01:27.805 ==> default: Starting domain. 00:01:29.723 ==> default: Waiting for domain to get an IP address... 00:01:47.874 ==> default: Waiting for SSH to become available... 00:01:47.874 ==> default: Configuring and enabling network interfaces... 00:01:51.179 default: SSH address: 192.168.121.50:22 00:01:51.179 default: SSH username: vagrant 00:01:51.179 default: SSH auth method: private key 00:01:53.091 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:59.651 ==> default: Mounting SSHFS shared folder... 00:02:00.585 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:00.585 ==> default: Checking Mount.. 00:02:01.520 ==> default: Folder Successfully Mounted! 00:02:01.520 00:02:01.520 SUCCESS! 00:02:01.520 00:02:01.520 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:01.520 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:01.520 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:01.520 00:02:01.530 [Pipeline] } 00:02:01.544 [Pipeline] // stage 00:02:01.552 [Pipeline] dir 00:02:01.553 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:01.554 [Pipeline] { 00:02:01.566 [Pipeline] catchError 00:02:01.567 [Pipeline] { 00:02:01.579 [Pipeline] sh 00:02:01.856 + vagrant ssh-config --host vagrant 00:02:01.856 + sed -ne '/^Host/,$p' 00:02:01.856 + tee ssh_conf 00:02:04.393 Host vagrant 00:02:04.393 HostName 192.168.121.50 00:02:04.393 User vagrant 00:02:04.393 Port 22 00:02:04.393 UserKnownHostsFile /dev/null 00:02:04.393 StrictHostKeyChecking no 00:02:04.393 PasswordAuthentication no 00:02:04.393 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:04.393 IdentitiesOnly yes 00:02:04.393 LogLevel FATAL 00:02:04.393 ForwardAgent yes 00:02:04.393 ForwardX11 yes 00:02:04.393 00:02:04.410 [Pipeline] withEnv 00:02:04.412 [Pipeline] { 00:02:04.426 [Pipeline] sh 00:02:04.706 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:04.706 source /etc/os-release 00:02:04.706 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.706 # Minimal, systemd-like check. 
00:02:04.706 if [[ -e /.dockerenv ]]; then 00:02:04.706 # Clear garbage from the node'\''s name: 00:02:04.706 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.706 # $HOSTNAME is the actual container id 00:02:04.706 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.706 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.706 # We can assume this is a mount from a host where container is running, 00:02:04.706 # so fetch its hostname to easily identify the target swarm worker. 00:02:04.706 container="$(< /etc/hostname) ($agent)" 00:02:04.706 else 00:02:04.706 # Fallback 00:02:04.706 container=$agent 00:02:04.706 fi 00:02:04.706 fi 00:02:04.706 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.706 ' 00:02:04.718 [Pipeline] } 00:02:04.733 [Pipeline] // withEnv 00:02:04.743 [Pipeline] setCustomBuildProperty 00:02:04.758 [Pipeline] stage 00:02:04.762 [Pipeline] { (Tests) 00:02:04.782 [Pipeline] sh 00:02:05.061 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.075 [Pipeline] sh 00:02:05.355 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:05.370 [Pipeline] timeout 00:02:05.370 Timeout set to expire in 50 min 00:02:05.372 [Pipeline] { 00:02:05.386 [Pipeline] sh 00:02:05.669 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:06.241 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:06.255 [Pipeline] sh 00:02:06.539 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:06.817 [Pipeline] sh 00:02:07.105 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:07.383 [Pipeline] sh 00:02:07.668 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:07.668 ++ readlink -f spdk_repo 00:02:07.929 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.929 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.929 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.929 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.929 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:07.929 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:02:07.929 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:07.929 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:07.929 + cd /home/vagrant/spdk_repo
00:02:07.929 + source /etc/os-release
00:02:07.929 ++ NAME='Fedora Linux'
00:02:07.929 ++ VERSION='39 (Cloud Edition)'
00:02:07.929 ++ ID=fedora
00:02:07.929 ++ VERSION_ID=39
00:02:07.929 ++ VERSION_CODENAME=
00:02:07.929 ++ PLATFORM_ID=platform:f39
00:02:07.929 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:07.929 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:07.929 ++ LOGO=fedora-logo-icon
00:02:07.929 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:07.929 ++ HOME_URL=https://fedoraproject.org/
00:02:07.929 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:07.929 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:07.929 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:07.929 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:07.929 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:07.929 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:07.929 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:07.929 ++ SUPPORT_END=2024-11-12
00:02:07.929 ++ VARIANT='Cloud Edition'
00:02:07.929 ++ VARIANT_ID=cloud
00:02:07.929 + uname -a
00:02:07.929 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:07.929 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:07.929 Hugepages
00:02:07.929 node hugesize free / total
00:02:07.929 node0 1048576kB 0 / 0
00:02:07.929 node0 2048kB 0 / 0
00:02:07.929
00:02:07.929 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:07.929 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:07.929 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:07.929 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:07.929 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:07.929 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:07.929 + rm -f /tmp/spdk-ld-path
00:02:07.929 + source autorun-spdk.conf
00:02:07.929 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.929 ++ SPDK_TEST_NVME=1
00:02:07.929 ++ SPDK_TEST_FTL=1
00:02:07.929 ++ SPDK_TEST_ISAL=1
00:02:07.929 ++ SPDK_RUN_ASAN=1
00:02:07.929 ++ SPDK_RUN_UBSAN=1
00:02:07.929 ++ SPDK_TEST_XNVME=1
00:02:07.929 ++ SPDK_TEST_NVME_FDP=1
00:02:07.929 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:07.929 ++ RUN_NIGHTLY=1
00:02:07.929 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:07.929 + [[ -n '' ]]
00:02:07.929 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:08.189 + for M in /var/spdk/build-*-manifest.txt
00:02:08.189 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:08.189 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.189 + for M in /var/spdk/build-*-manifest.txt
00:02:08.189 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:08.189 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.189 + for M in /var/spdk/build-*-manifest.txt
00:02:08.189 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:08.189 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:08.189 ++ uname
00:02:08.189 + [[ Linux == \L\i\n\u\x ]]
00:02:08.189 + sudo dmesg -T
00:02:08.189 + sudo dmesg --clear
00:02:08.189 + dmesg_pid=4982
00:02:08.189 + [[ Fedora Linux == FreeBSD ]]
00:02:08.189 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.189 +
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.189 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.189 + sudo dmesg -Tw 00:02:08.189 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.189 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.189 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.189 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.189 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.189 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.189 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.189 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.189 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.189 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.189 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.189 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.189 Test configuration: 00:02:08.189 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.189 SPDK_TEST_NVME=1 00:02:08.189 SPDK_TEST_FTL=1 00:02:08.189 SPDK_TEST_ISAL=1 00:02:08.189 SPDK_RUN_ASAN=1 00:02:08.189 SPDK_RUN_UBSAN=1 00:02:08.189 SPDK_TEST_XNVME=1 00:02:08.189 SPDK_TEST_NVME_FDP=1 00:02:08.189 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.189 RUN_NIGHTLY=1 13:57:11 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:08.189 13:57:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.189 13:57:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.189 13:57:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.189 13:57:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.189 13:57:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.190 13:57:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.190 13:57:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.190 13:57:11 -- paths/export.sh@5 -- $ export PATH 00:02:08.190 13:57:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.190 13:57:11 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.190 13:57:11 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:08.190 13:57:11 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733666231.XXXXXX 00:02:08.190 13:57:11 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733666231.S6fw8z 00:02:08.190 13:57:11 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:08.190 13:57:11 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:08.190 13:57:11 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.190 13:57:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.190 13:57:11 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.190 13:57:11 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:08.190 13:57:11 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:08.190 13:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.190 13:57:11 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:08.190 13:57:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.190 13:57:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.190 13:57:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.190 13:57:11 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.190 Sun Dec 8 01:57:11 PM UTC 2024 00:02:08.190 13:57:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.190 LTS-67-gc13c99a5e 00:02:08.190 13:57:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:08.190 13:57:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:08.190 13:57:11 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:08.190 13:57:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:08.190 13:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.190 ************************************ 00:02:08.190 START TEST asan 00:02:08.190 ************************************ 00:02:08.190 using asan 00:02:08.190 13:57:11 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:08.190 00:02:08.190 real 0m0.000s 00:02:08.190 user 0m0.000s 00:02:08.190 sys 0m0.000s 00:02:08.190 13:57:11 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:08.190 ************************************ 00:02:08.190 END TEST asan 00:02:08.190 ************************************ 00:02:08.190 13:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.450 13:57:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.450 13:57:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.450 13:57:11 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:08.450 13:57:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:08.450 13:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.451 ************************************ 00:02:08.451 START TEST ubsan 00:02:08.451 ************************************ 00:02:08.451 using ubsan 00:02:08.451 13:57:11 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:08.451 00:02:08.451 real 0m0.000s 00:02:08.451 user 0m0.000s 00:02:08.451 sys 
0m0.000s 00:02:08.451 13:57:11 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:08.451 ************************************ 00:02:08.451 END TEST ubsan 00:02:08.451 ************************************ 00:02:08.451 13:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.451 13:57:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.451 13:57:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.451 13:57:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.451 13:57:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.451 13:57:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.451 13:57:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.451 13:57:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.451 13:57:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.451 13:57:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:08.451 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.451 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.711 Using 'verbs' RDMA provider 00:02:21.885 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:31.992 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:31.992 Creating mk/config.mk...done. 00:02:31.992 Creating mk/cc.flags.mk...done. 00:02:31.992 Type 'make' to build. 00:02:31.992 13:57:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:31.992 13:57:34 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:31.992 13:57:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:31.992 13:57:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.992 ************************************ 00:02:31.992 START TEST make 00:02:31.992 ************************************ 00:02:31.992 13:57:34 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:31.992 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:31.992 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:31.992 meson setup builddir \ 00:02:31.992 -Dwith-libaio=enabled \ 00:02:31.992 -Dwith-liburing=enabled \ 00:02:31.992 -Dwith-libvfn=disabled \ 00:02:31.992 -Dwith-spdk=false && \ 00:02:31.992 meson compile -C builddir && \ 00:02:31.992 cd -) 00:02:31.992 make[1]: Nothing to be done for 'all'. 
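Before the compiler output that follows: the test make first drives the bundled xnvme build through meson, per the subshell printed at the start of this step. A standalone sketch of the same configure/compile, using only the paths and options from the logged invocation:

  # Standalone equivalent of the xnvme meson configure/compile traced above.
  cd /home/vagrant/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  # Options mirror the logged invocation: libaio and io_uring backends on,
  # libvfn backend off, and no SPDK integration inside the nested build.
  meson setup builddir \
    -Dwith-libaio=enabled \
    -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled \
    -Dwith-spdk=false
  meson compile -C builddir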
00:02:33.898 The Meson build system
00:02:33.898 Version: 1.5.0
00:02:33.898 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:33.898 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:33.898 Build type: native build
00:02:33.898 Project name: xnvme
00:02:33.898 Project version: 0.7.3
00:02:33.898 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:33.898 C linker for the host machine: cc ld.bfd 2.40-14
00:02:33.898 Host machine cpu family: x86_64
00:02:33.898 Host machine cpu: x86_64
00:02:33.898 Message: host_machine.system: linux
00:02:33.898 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:33.898 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:33.898 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:33.898 Run-time dependency threads found: YES
00:02:33.898 Has header "setupapi.h" : NO
00:02:33.898 Has header "linux/blkzoned.h" : YES
00:02:33.898 Has header "linux/blkzoned.h" : YES (cached)
00:02:33.898 Has header "libaio.h" : YES
00:02:33.898 Library aio found: YES
00:02:33.898 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:33.898 Run-time dependency liburing found: YES 2.2
00:02:33.898 Dependency libvfn skipped: feature with-libvfn disabled
00:02:33.898 Run-time dependency appleframeworks found: NO (tried framework)
00:02:33.898 Run-time dependency appleframeworks found: NO (tried framework)
00:02:33.898 Configuring xnvme_config.h using configuration
00:02:33.898 Configuring xnvme.spec using configuration
00:02:33.898 Run-time dependency bash-completion found: YES 2.11
00:02:33.898 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:33.898 Program cp found: YES (/usr/bin/cp)
00:02:33.898 Has header "winsock2.h" : NO
00:02:33.898 Has header "dbghelp.h" : NO
00:02:33.898 Library rpcrt4 found: NO
00:02:33.898 Library rt found: YES
00:02:33.898 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:33.898 Found CMake: /usr/bin/cmake (3.27.7)
00:02:33.898 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:33.898 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:33.898 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:33.898 Build targets in project: 32
00:02:33.898
00:02:33.898 xnvme 0.7.3
00:02:33.898
00:02:33.898 User defined options
00:02:33.898 with-libaio : enabled
00:02:33.898 with-liburing: enabled
00:02:33.898 with-libvfn : disabled
00:02:33.898 with-spdk : false
00:02:33.898
00:02:33.898 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:34.465 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:34.465 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:34.465 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:34.465 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:34.465 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:34.465 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:34.465 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:34.465 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:34.465 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:34.465 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:34.465 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:34.465 [11/203]
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:34.465 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:34.465 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:34.465 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:34.465 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:34.465 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:34.465 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:34.465 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:34.465 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:34.465 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:34.465 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:34.722 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:34.722 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:34.722 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:34.722 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:34.722 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:34.722 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:34.722 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:34.722 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:34.722 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:34.722 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:34.722 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:34.722 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:34.722 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:34.722 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:34.722 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:34.722 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:34.722 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:34.722 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:34.722 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:34.722 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:34.722 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:34.722 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:34.722 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:34.722 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:34.722 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:34.722 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:34.722 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:34.722 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:34.722 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:34.722 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:34.722 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:34.722 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:34.722 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:34.722 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:34.722 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:34.722 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:34.723 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:34.723 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:34.723 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:34.980 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:34.980 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:34.980 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:34.980 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:34.980 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:34.980 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:34.980 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:34.980 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:34.980 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:34.980 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:34.980 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:34.980 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:34.980 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:34.980 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:34.980 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:34.980 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:34.980 [77/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:34.980 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:34.980 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:34.980 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:34.980 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:34.980 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:34.981 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:34.981 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:35.238 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:35.238 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:35.238 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:35.238 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:35.238 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:35.238 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:35.238 [91/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:35.238 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:35.238 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:35.238 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:35.238 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:35.238 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:35.238 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:35.239 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:35.239 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:35.239 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:35.239 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:35.239 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:35.239 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:35.239 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:35.239 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:35.239 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:35.239 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:35.239 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:35.239 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:35.239 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:35.239 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:35.239 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:35.239 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:35.239 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:35.239 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:35.239 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:35.239 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:35.239 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:35.239 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:35.239 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:35.239 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:35.496 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:35.496 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:35.496 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:35.496 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:35.496 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:35.496 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:35.496 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:35.496 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:35.496 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:35.496 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:35.496 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:35.496 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:35.496 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:35.496 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:35.496 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:35.496 [137/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:35.496 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:35.496 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:35.496 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:35.496 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:35.754 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:35.754 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:35.754 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:35.754 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:35.754 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:35.754 [147/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:35.754 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:35.754 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:35.754 [150/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:35.754 [151/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:35.754 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:35.754 [153/203] Linking target lib/libxnvme.so 00:02:35.754 [154/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:35.754 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:35.754 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:35.754 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:35.754 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:35.754 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:35.754 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:35.754 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:36.012 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:36.012 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:36.012 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:36.012 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:36.012 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:36.012 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:36.012 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:36.012 [169/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:36.012 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:36.012 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:36.012 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:36.269 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:36.269 [174/203] Linking static target lib/libxnvme.a 00:02:36.269 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:36.269 [176/203] Linking target tests/xnvme_tests_znd_state 00:02:36.269 [177/203] Linking target tests/xnvme_tests_lblk 00:02:36.269 [178/203] Linking target tests/xnvme_tests_cli 00:02:36.269 [179/203] Linking target tests/xnvme_tests_enum 00:02:36.269 [180/203] Linking target tests/xnvme_tests_ioworker 00:02:36.269 [181/203] Linking target tests/xnvme_tests_xnvme_file 00:02:36.269 [182/203] Linking target tests/xnvme_tests_buf 00:02:36.269 [183/203] Linking target tests/xnvme_tests_scc 00:02:36.269 [184/203] Linking target tests/xnvme_tests_znd_append 00:02:36.269 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:36.269 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:36.269 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:36.269 [188/203] Linking target tests/xnvme_tests_map 00:02:36.269 [189/203] Linking target tools/xdd 00:02:36.269 [190/203] Linking target tools/lblk 00:02:36.269 [191/203] Linking target 
tests/xnvme_tests_kvs 00:02:36.269 [192/203] Linking target tools/zoned 00:02:36.269 [193/203] Linking target examples/xnvme_hello 00:02:36.269 [194/203] Linking target tools/kvs 00:02:36.269 [195/203] Linking target examples/xnvme_single_async 00:02:36.269 [196/203] Linking target examples/xnvme_dev 00:02:36.269 [197/203] Linking target examples/xnvme_enum 00:02:36.269 [198/203] Linking target examples/xnvme_io_async 00:02:36.269 [199/203] Linking target tools/xnvme_file 00:02:36.269 [200/203] Linking target tools/xnvme 00:02:36.269 [201/203] Linking target examples/zoned_io_sync 00:02:36.269 [202/203] Linking target examples/xnvme_single_sync 00:02:36.269 [203/203] Linking target examples/zoned_io_async 00:02:36.326 INFO: autodetecting backend as ninja 00:02:36.326 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:36.326 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:41.580 The Meson build system 00:02:41.580 Version: 1.5.0 00:02:41.580 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:41.580 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:41.580 Build type: native build 00:02:41.580 Program cat found: YES (/usr/bin/cat) 00:02:41.580 Project name: DPDK 00:02:41.580 Project version: 23.11.0 00:02:41.580 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.580 C linker for the host machine: cc ld.bfd 2.40-14 00:02:41.580 Host machine cpu family: x86_64 00:02:41.580 Host machine cpu: x86_64 00:02:41.580 Message: ## Building in Developer Mode ## 00:02:41.580 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.580 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:41.580 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.580 Program python3 found: YES (/usr/bin/python3) 00:02:41.580 Program cat found: YES (/usr/bin/cat) 00:02:41.580 Compiler for C supports arguments -march=native: YES 00:02:41.580 Checking for size of "void *" : 8 00:02:41.580 Checking for size of "void *" : 8 (cached) 00:02:41.580 Library m found: YES 00:02:41.580 Library numa found: YES 00:02:41.580 Has header "numaif.h" : YES 00:02:41.580 Library fdt found: NO 00:02:41.580 Library execinfo found: NO 00:02:41.580 Has header "execinfo.h" : YES 00:02:41.580 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.580 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.580 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.580 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.580 Run-time dependency openssl found: YES 3.1.1 00:02:41.580 Run-time dependency libpcap found: YES 1.10.4 00:02:41.580 Has header "pcap.h" with dependency libpcap: YES 00:02:41.580 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.580 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.580 Compiler for C supports arguments -Wformat: YES 00:02:41.580 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.580 Compiler for C supports arguments -Wformat-security: NO 00:02:41.580 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.580 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.580 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.580 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.580 Compiler for C supports arguments -Wpointer-arith: 
YES 00:02:41.580 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.580 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.580 Compiler for C supports arguments -Wundef: YES 00:02:41.580 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.580 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.580 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.580 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.580 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.580 Program objdump found: YES (/usr/bin/objdump) 00:02:41.580 Compiler for C supports arguments -mavx512f: YES 00:02:41.580 Checking if "AVX512 checking" compiles: YES 00:02:41.580 Fetching value of define "__SSE4_2__" : 1 00:02:41.580 Fetching value of define "__AES__" : 1 00:02:41.580 Fetching value of define "__AVX__" : 1 00:02:41.580 Fetching value of define "__AVX2__" : 1 00:02:41.580 Fetching value of define "__AVX512BW__" : 1 00:02:41.580 Fetching value of define "__AVX512CD__" : 1 00:02:41.580 Fetching value of define "__AVX512DQ__" : 1 00:02:41.580 Fetching value of define "__AVX512F__" : 1 00:02:41.580 Fetching value of define "__AVX512VL__" : 1 00:02:41.580 Fetching value of define "__PCLMUL__" : 1 00:02:41.580 Fetching value of define "__RDRND__" : 1 00:02:41.580 Fetching value of define "__RDSEED__" : 1 00:02:41.580 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:41.580 Fetching value of define "__znver1__" : (undefined) 00:02:41.580 Fetching value of define "__znver2__" : (undefined) 00:02:41.580 Fetching value of define "__znver3__" : (undefined) 00:02:41.580 Fetching value of define "__znver4__" : (undefined) 00:02:41.580 Library asan found: YES 00:02:41.580 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.580 Message: lib/log: Defining dependency "log" 00:02:41.580 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.580 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.580 Library rt found: YES 00:02:41.580 Checking for function "getentropy" : NO 00:02:41.580 Message: lib/eal: Defining dependency "eal" 00:02:41.580 Message: lib/ring: Defining dependency "ring" 00:02:41.580 Message: lib/rcu: Defining dependency "rcu" 00:02:41.580 Message: lib/mempool: Defining dependency "mempool" 00:02:41.580 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.580 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.580 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.580 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.580 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:41.580 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:41.580 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:41.580 Compiler for C supports arguments -mpclmul: YES 00:02:41.580 Compiler for C supports arguments -maes: YES 00:02:41.580 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.580 Compiler for C supports arguments -mavx512bw: YES 00:02:41.580 Compiler for C supports arguments -mavx512dq: YES 00:02:41.580 Compiler for C supports arguments -mavx512vl: YES 00:02:41.580 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.580 Compiler for C supports arguments -mavx2: YES 00:02:41.580 Compiler for C supports arguments -mavx: YES 00:02:41.580 Message: lib/net: Defining dependency "net" 00:02:41.580 Message: lib/meter: Defining dependency "meter" 00:02:41.580 Message: lib/ethdev: Defining 
dependency "ethdev" 00:02:41.580 Message: lib/pci: Defining dependency "pci" 00:02:41.580 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.580 Message: lib/hash: Defining dependency "hash" 00:02:41.580 Message: lib/timer: Defining dependency "timer" 00:02:41.580 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.580 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.580 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.580 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.580 Message: lib/power: Defining dependency "power" 00:02:41.580 Message: lib/reorder: Defining dependency "reorder" 00:02:41.580 Message: lib/security: Defining dependency "security" 00:02:41.580 Has header "linux/userfaultfd.h" : YES 00:02:41.580 Has header "linux/vduse.h" : YES 00:02:41.580 Message: lib/vhost: Defining dependency "vhost" 00:02:41.580 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.580 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.580 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.580 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.580 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.580 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.580 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.580 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.580 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.580 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.580 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.580 Configuring doxy-api-html.conf using configuration 00:02:41.580 Configuring doxy-api-man.conf using configuration 00:02:41.580 Program mandb found: YES (/usr/bin/mandb) 00:02:41.580 Program sphinx-build found: NO 00:02:41.580 Configuring rte_build_config.h using configuration 00:02:41.580 Message: 00:02:41.580 ================= 00:02:41.580 Applications Enabled 00:02:41.580 ================= 00:02:41.580 00:02:41.580 apps: 00:02:41.580 00:02:41.580 00:02:41.580 Message: 00:02:41.580 ================= 00:02:41.580 Libraries Enabled 00:02:41.580 ================= 00:02:41.580 00:02:41.580 libs: 00:02:41.580 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.580 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.580 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.580 00:02:41.580 Message: 00:02:41.580 =============== 00:02:41.580 Drivers Enabled 00:02:41.580 =============== 00:02:41.580 00:02:41.580 common: 00:02:41.580 00:02:41.580 bus: 00:02:41.580 pci, vdev, 00:02:41.580 mempool: 00:02:41.580 ring, 00:02:41.580 dma: 00:02:41.580 00:02:41.580 net: 00:02:41.580 00:02:41.580 crypto: 00:02:41.580 00:02:41.580 compress: 00:02:41.580 00:02:41.580 vdpa: 00:02:41.580 00:02:41.580 00:02:41.580 Message: 00:02:41.580 ================= 00:02:41.581 Content Skipped 00:02:41.581 ================= 00:02:41.581 00:02:41.581 apps: 00:02:41.581 dumpcap: explicitly disabled via build config 00:02:41.581 graph: explicitly disabled via build config 00:02:41.581 pdump: explicitly disabled via build config 00:02:41.581 proc-info: explicitly disabled via build config 00:02:41.581 test-acl: explicitly disabled via build config 00:02:41.581 test-bbdev: explicitly disabled via build config 00:02:41.581 test-cmdline: explicitly 
00:02:41.581 test-compress-perf: explicitly disabled via build config
00:02:41.581 test-crypto-perf: explicitly disabled via build config
00:02:41.581 test-dma-perf: explicitly disabled via build config
00:02:41.581 test-eventdev: explicitly disabled via build config
00:02:41.581 test-fib: explicitly disabled via build config
00:02:41.581 test-flow-perf: explicitly disabled via build config
00:02:41.581 test-gpudev: explicitly disabled via build config
00:02:41.581 test-mldev: explicitly disabled via build config
00:02:41.581 test-pipeline: explicitly disabled via build config
00:02:41.581 test-pmd: explicitly disabled via build config
00:02:41.581 test-regex: explicitly disabled via build config
00:02:41.581 test-sad: explicitly disabled via build config
00:02:41.581 test-security-perf: explicitly disabled via build config
00:02:41.581
00:02:41.581 libs:
00:02:41.581 metrics: explicitly disabled via build config
00:02:41.581 acl: explicitly disabled via build config
00:02:41.581 bbdev: explicitly disabled via build config
00:02:41.581 bitratestats: explicitly disabled via build config
00:02:41.581 bpf: explicitly disabled via build config
00:02:41.581 cfgfile: explicitly disabled via build config
00:02:41.581 distributor: explicitly disabled via build config
00:02:41.581 efd: explicitly disabled via build config
00:02:41.581 eventdev: explicitly disabled via build config
00:02:41.581 dispatcher: explicitly disabled via build config
00:02:41.581 gpudev: explicitly disabled via build config
00:02:41.581 gro: explicitly disabled via build config
00:02:41.581 gso: explicitly disabled via build config
00:02:41.581 ip_frag: explicitly disabled via build config
00:02:41.581 jobstats: explicitly disabled via build config
00:02:41.581 latencystats: explicitly disabled via build config
00:02:41.581 lpm: explicitly disabled via build config
00:02:41.581 member: explicitly disabled via build config
00:02:41.581 pcapng: explicitly disabled via build config
00:02:41.581 rawdev: explicitly disabled via build config
00:02:41.581 regexdev: explicitly disabled via build config
00:02:41.581 mldev: explicitly disabled via build config
00:02:41.581 rib: explicitly disabled via build config
00:02:41.581 sched: explicitly disabled via build config
00:02:41.581 stack: explicitly disabled via build config
00:02:41.581 ipsec: explicitly disabled via build config
00:02:41.581 pdcp: explicitly disabled via build config
00:02:41.581 fib: explicitly disabled via build config
00:02:41.581 port: explicitly disabled via build config
00:02:41.581 pdump: explicitly disabled via build config
00:02:41.581 table: explicitly disabled via build config
00:02:41.581 pipeline: explicitly disabled via build config
00:02:41.581 graph: explicitly disabled via build config
00:02:41.581 node: explicitly disabled via build config
00:02:41.581
00:02:41.581 drivers:
00:02:41.581 common/cpt: not in enabled drivers build config
00:02:41.581 common/dpaax: not in enabled drivers build config
00:02:41.581 common/iavf: not in enabled drivers build config
00:02:41.581 common/idpf: not in enabled drivers build config
00:02:41.581 common/mvep: not in enabled drivers build config
00:02:41.581 common/octeontx: not in enabled drivers build config
00:02:41.581 bus/auxiliary: not in enabled drivers build config
00:02:41.581 bus/cdx: not in enabled drivers build config
00:02:41.581 bus/dpaa: not in enabled drivers build config
00:02:41.581 bus/fslmc: not in enabled drivers build config
00:02:41.581 bus/ifpga: not in enabled drivers build config
00:02:41.581 bus/platform: not in enabled drivers build config
00:02:41.581 bus/vmbus: not in enabled drivers build config
00:02:41.581 common/cnxk: not in enabled drivers build config
00:02:41.581 common/mlx5: not in enabled drivers build config
00:02:41.581 common/nfp: not in enabled drivers build config
00:02:41.581 common/qat: not in enabled drivers build config
00:02:41.581 common/sfc_efx: not in enabled drivers build config
00:02:41.581 mempool/bucket: not in enabled drivers build config
00:02:41.581 mempool/cnxk: not in enabled drivers build config
00:02:41.581 mempool/dpaa: not in enabled drivers build config
00:02:41.581 mempool/dpaa2: not in enabled drivers build config
00:02:41.581 mempool/octeontx: not in enabled drivers build config
00:02:41.581 mempool/stack: not in enabled drivers build config
00:02:41.581 dma/cnxk: not in enabled drivers build config
00:02:41.581 dma/dpaa: not in enabled drivers build config
00:02:41.581 dma/dpaa2: not in enabled drivers build config
00:02:41.581 dma/hisilicon: not in enabled drivers build config
00:02:41.581 dma/idxd: not in enabled drivers build config
00:02:41.581 dma/ioat: not in enabled drivers build config
00:02:41.581 dma/skeleton: not in enabled drivers build config
00:02:41.581 net/af_packet: not in enabled drivers build config
00:02:41.581 net/af_xdp: not in enabled drivers build config
00:02:41.581 net/ark: not in enabled drivers build config
00:02:41.581 net/atlantic: not in enabled drivers build config
00:02:41.581 net/avp: not in enabled drivers build config
00:02:41.581 net/axgbe: not in enabled drivers build config
00:02:41.581 net/bnx2x: not in enabled drivers build config
00:02:41.581 net/bnxt: not in enabled drivers build config
00:02:41.581 net/bonding: not in enabled drivers build config
00:02:41.581 net/cnxk: not in enabled drivers build config
00:02:41.581 net/cpfl: not in enabled drivers build config
00:02:41.581 net/cxgbe: not in enabled drivers build config
00:02:41.581 net/dpaa: not in enabled drivers build config
00:02:41.581 net/dpaa2: not in enabled drivers build config
00:02:41.581 net/e1000: not in enabled drivers build config
00:02:41.581 net/ena: not in enabled drivers build config
00:02:41.581 net/enetc: not in enabled drivers build config
00:02:41.581 net/enetfec: not in enabled drivers build config
00:02:41.581 net/enic: not in enabled drivers build config
00:02:41.581 net/failsafe: not in enabled drivers build config
00:02:41.581 net/fm10k: not in enabled drivers build config
00:02:41.581 net/gve: not in enabled drivers build config
00:02:41.581 net/hinic: not in enabled drivers build config
00:02:41.581 net/hns3: not in enabled drivers build config
00:02:41.581 net/i40e: not in enabled drivers build config
00:02:41.581 net/iavf: not in enabled drivers build config
00:02:41.581 net/ice: not in enabled drivers build config
00:02:41.581 net/idpf: not in enabled drivers build config
00:02:41.581 net/igc: not in enabled drivers build config
00:02:41.581 net/ionic: not in enabled drivers build config
00:02:41.581 net/ipn3ke: not in enabled drivers build config
00:02:41.581 net/ixgbe: not in enabled drivers build config
00:02:41.581 net/mana: not in enabled drivers build config
00:02:41.581 net/memif: not in enabled drivers build config
00:02:41.581 net/mlx4: not in enabled drivers build config
00:02:41.581 net/mlx5: not in enabled drivers build config
00:02:41.581 net/mvneta: not in enabled drivers build config
00:02:41.581 net/mvpp2: not in enabled drivers build config
00:02:41.581 net/netvsc: not in enabled drivers build config
00:02:41.581 net/nfb: not in enabled drivers build config
00:02:41.581 net/nfp: not in enabled drivers build config
00:02:41.581 net/ngbe: not in enabled drivers build config
00:02:41.581 net/null: not in enabled drivers build config
00:02:41.581 net/octeontx: not in enabled drivers build config
00:02:41.581 net/octeon_ep: not in enabled drivers build config
00:02:41.581 net/pcap: not in enabled drivers build config
00:02:41.581 net/pfe: not in enabled drivers build config
00:02:41.581 net/qede: not in enabled drivers build config
00:02:41.581 net/ring: not in enabled drivers build config
00:02:41.581 net/sfc: not in enabled drivers build config
00:02:41.581 net/softnic: not in enabled drivers build config
00:02:41.581 net/tap: not in enabled drivers build config
00:02:41.581 net/thunderx: not in enabled drivers build config
00:02:41.581 net/txgbe: not in enabled drivers build config
00:02:41.581 net/vdev_netvsc: not in enabled drivers build config
00:02:41.581 net/vhost: not in enabled drivers build config
00:02:41.581 net/virtio: not in enabled drivers build config
00:02:41.581 net/vmxnet3: not in enabled drivers build config
00:02:41.581 raw/*: missing internal dependency, "rawdev"
00:02:41.581 crypto/armv8: not in enabled drivers build config
00:02:41.581 crypto/bcmfs: not in enabled drivers build config
00:02:41.581 crypto/caam_jr: not in enabled drivers build config
00:02:41.581 crypto/ccp: not in enabled drivers build config
00:02:41.581 crypto/cnxk: not in enabled drivers build config
00:02:41.581 crypto/dpaa_sec: not in enabled drivers build config
00:02:41.581 crypto/dpaa2_sec: not in enabled drivers build config
00:02:41.581 crypto/ipsec_mb: not in enabled drivers build config
00:02:41.581 crypto/mlx5: not in enabled drivers build config
00:02:41.581 crypto/mvsam: not in enabled drivers build config
00:02:41.581 crypto/nitrox: not in enabled drivers build config
00:02:41.581 crypto/null: not in enabled drivers build config
00:02:41.581 crypto/octeontx: not in enabled drivers build config
00:02:41.581 crypto/openssl: not in enabled drivers build config
00:02:41.581 crypto/scheduler: not in enabled drivers build config
00:02:41.581 crypto/uadk: not in enabled drivers build config
00:02:41.581 crypto/virtio: not in enabled drivers build config
00:02:41.581 compress/isal: not in enabled drivers build config
00:02:41.581 compress/mlx5: not in enabled drivers build config
00:02:41.581 compress/octeontx: not in enabled drivers build config
00:02:41.581 compress/zlib: not in enabled drivers build config
00:02:41.581 regex/*: missing internal dependency, "regexdev"
00:02:41.581 ml/*: missing internal dependency, "mldev"
00:02:41.581 vdpa/ifc: not in enabled drivers build config
00:02:41.581 vdpa/mlx5: not in enabled drivers build config
00:02:41.581 vdpa/nfp: not in enabled drivers build config
00:02:41.581 vdpa/sfc: not in enabled drivers build config
00:02:41.581 event/*: missing internal dependency, "eventdev"
00:02:41.581 baseband/*: missing internal dependency, "bbdev"
00:02:41.581 gpu/*: missing internal dependency, "gpudev"
00:02:41.581
00:02:41.581
00:02:41.581 Build targets in project: 84
00:02:41.581
00:02:41.581 DPDK 23.11.0
00:02:41.581
00:02:41.581 User defined options
00:02:41.581 buildtype : debug
00:02:41.582 default_library : shared
00:02:41.582 libdir : lib
00:02:41.582 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:41.582 b_sanitize : address
00:02:41.582 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:02:41.582 c_link_args :
00:02:41.582 cpu_instruction_set: native
00:02:41.582 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:41.582 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:41.582 enable_docs : false
00:02:41.582 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:41.582 enable_kmods : false
00:02:41.582 tests : false
00:02:41.582
00:02:41.582 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:41.582 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:41.839 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:41.839 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:41.839 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:41.839 [4/264] Linking static target lib/librte_kvargs.a
00:02:41.839 [5/264] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:41.839 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:41.839 [7/264] Linking static target lib/librte_log.a
00:02:41.839 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:41.839 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:41.839 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:42.097 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:42.097 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:42.097 [13/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.097 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:42.097 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:42.354 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:42.354 [17/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:42.354 [18/264] Linking static target lib/librte_telemetry.a
00:02:42.354 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:42.354 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:42.354 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:42.610 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:42.610 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:42.610 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:42.610 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:42.610 [26/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.610 [27/264] Linking target lib/librte_log.so.24.0
00:02:42.610 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:42.610 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:42.610 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:42.867 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:42.867 [32/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:42.867 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:42.867 [34/264] Linking target lib/librte_kvargs.so.24.0
00:02:42.867 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:42.867 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:42.867 [37/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:42.867 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:42.867 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:42.867 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:42.867 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:42.867 [42/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.867 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:43.123 [44/264] Linking target lib/librte_telemetry.so.24.0
00:02:43.123 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:43.123 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:43.123 [47/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:43.123 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:43.123 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:43.380 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:43.380 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:43.380 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:43.380 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:43.380 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:43.380 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:43.380 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:43.380 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:43.380 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:43.380 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:43.637 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:43.637 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:43.637 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:43.637 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:43.637 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:43.637 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:43.894 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:43.894 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:43.894 [68/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:43.894 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:43.894 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:43.894 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:43.894 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:43.894 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:43.894 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:43.894 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:43.894 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:44.152 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:44.152 [78/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:44.152 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:44.152 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:44.152 [81/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:44.152 [82/264] Linking static target lib/librte_ring.a
00:02:44.152 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:44.409 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:44.409 [85/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:44.409 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:44.409 [87/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:44.409 [88/264] Linking static target lib/librte_rcu.a
00:02:44.409 [89/264] Linking static target lib/librte_eal.a
00:02:44.667 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:44.667 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:44.667 [92/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:44.667 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:44.667 [94/264] Linking static target lib/librte_mempool.a
00:02:44.667 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.667 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:44.924 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:44.924 [98/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.924 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:44.924 [100/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:44.924 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:45.181 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:45.181 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:45.181 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:45.181 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:45.181 [106/264] Linking static target lib/librte_meter.a
00:02:45.440 [107/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:45.440 [108/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:45.440 [109/264] Linking static target lib/librte_net.a
00:02:45.440 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:45.440 [111/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:45.440 [112/264] Linking static target lib/librte_mbuf.a
00:02:45.440 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:45.440 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:45.440 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.699 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.699 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.699 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:45.699 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:45.957 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:45.957 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:46.215 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:46.215 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:46.215 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:46.215 [125/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.215 [126/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:46.215 [127/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:46.215 [128/264] Linking static target lib/librte_pci.a
00:02:46.215 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:46.215 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:46.215 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:46.472 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:46.472 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:46.472 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:46.472 [135/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:46.472 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:46.473 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:46.473 [138/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.473 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:46.473 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:46.473 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:46.473 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:46.473 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:46.473 [144/264] Linking static target lib/librte_cmdline.a
00:02:46.731 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:46.731 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:46.731 [147/264] Linking static target lib/librte_timer.a
00:02:46.731 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:46.731 [149/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:46.988 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:46.988 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:46.988 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:47.246 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:47.246 [154/264] Linking static target lib/librte_compressdev.a
00:02:47.246 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:47.246 [156/264] Linking static target lib/librte_ethdev.a
00:02:47.246 [157/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:47.246 [158/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.246 [159/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:47.504 [160/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:47.504 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:47.504 [162/264] Linking static target lib/librte_hash.a
00:02:47.504 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:47.504 [164/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:47.504 [165/264] Linking static target lib/librte_dmadev.a
00:02:47.763 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:47.763 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:47.763 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:47.763 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:47.763 [170/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.763 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.022 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.022 [173/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:48.022 [174/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:48.022 [175/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:48.022 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:48.022 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:48.280 [178/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.280 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:48.280 [180/264] Linking static target lib/librte_power.a
00:02:48.280 [181/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:48.280 [182/264] Linking static target lib/librte_cryptodev.a
00:02:48.280 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:48.280 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:48.280 [185/264] Linking static target lib/librte_reorder.a
00:02:48.538 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:48.538 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:48.538 [188/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:48.538 [189/264] Linking static target lib/librte_security.a
00:02:48.797 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.797 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:49.056 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.056 [193/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.056 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:49.056 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:49.057 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:49.318 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:49.318 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:49.318 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:49.318 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:49.318 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:49.576 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:49.576 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:49.576 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:49.576 [205/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.576 [206/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:49.576 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:49.888 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:49.888 [209/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:49.888 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:49.888 [211/264] Linking static target drivers/librte_bus_vdev.a
00:02:49.888 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:49.888 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:49.888 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:49.888 [215/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:49.888 [216/264] Linking static target drivers/librte_bus_pci.a
00:02:49.888 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:50.146 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.146 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:50.146 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:50.146 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:50.146 [222/264] Linking static target drivers/librte_mempool_ring.a
00:02:50.146 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.081 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:51.648 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.648 [226/264] Linking target lib/librte_eal.so.24.0
00:02:51.905 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:51.905 [228/264] Linking target lib/librte_meter.so.24.0
00:02:51.905 [229/264] Linking target drivers/librte_bus_vdev.so.24.0
00:02:51.905 [230/264] Linking target lib/librte_timer.so.24.0
00:02:51.905 [231/264] Linking target lib/librte_pci.so.24.0
00:02:51.905 [232/264] Linking target lib/librte_ring.so.24.0
00:02:51.905 [233/264] Linking target lib/librte_dmadev.so.24.0
00:02:51.905 [234/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:51.905 [235/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:51.906 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:51.906 [237/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:51.906 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:51.906 [239/264] Linking target lib/librte_rcu.so.24.0
00:02:52.164 [240/264] Linking target drivers/librte_bus_pci.so.24.0
00:02:52.164 [241/264] Linking target lib/librte_mempool.so.24.0
00:02:52.164 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:52.164 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:52.164 [244/264] Linking target drivers/librte_mempool_ring.so.24.0
00:02:52.164 [245/264] Linking target lib/librte_mbuf.so.24.0
00:02:52.423 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:52.423 [247/264] Linking target lib/librte_reorder.so.24.0
00:02:52.423 [248/264] Linking target lib/librte_compressdev.so.24.0
00:02:52.423 [249/264] Linking target lib/librte_net.so.24.0
00:02:52.423 [250/264] Linking target lib/librte_cryptodev.so.24.0
00:02:52.423 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:52.423 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:52.423 [253/264] Linking target lib/librte_security.so.24.0
00:02:52.423 [254/264] Linking target lib/librte_hash.so.24.0
00:02:52.423 [255/264] Linking target lib/librte_cmdline.so.24.0
00:02:52.682 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:52.682 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.682 [258/264] Linking target lib/librte_ethdev.so.24.0
00:02:52.682 [259/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:52.941 [260/264] Linking static target lib/librte_vhost.a
00:02:52.941 [261/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:52.941 [262/264] Linking target lib/librte_power.so.24.0
00:02:54.342 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.342 [264/264] Linking target lib/librte_vhost.so.24.0
00:02:54.342 INFO: autodetecting backend as ninja
00:02:54.342 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:55.276 CC lib/ut_mock/mock.o
00:02:55.276 CC lib/ut/ut.o
00:02:55.276 CC lib/log/log.o
00:02:55.276 CC lib/log/log_deprecated.o
00:02:55.276 CC lib/log/log_flags.o
00:02:55.276 LIB libspdk_ut_mock.a
00:02:55.276 SO libspdk_ut_mock.so.5.0
00:02:55.276 LIB libspdk_ut.a
00:02:55.276 SO libspdk_ut.so.1.0
00:02:55.535 SYMLINK libspdk_ut_mock.so
00:02:55.535 LIB libspdk_log.a
00:02:55.535 SYMLINK libspdk_ut.so
00:02:55.535 SO libspdk_log.so.6.1
00:02:55.535 SYMLINK libspdk_log.so
00:02:55.535 CC lib/util/bit_array.o
00:02:55.535 CC lib/util/base64.o
00:02:55.535 CC lib/util/cpuset.o
00:02:55.535 CC lib/util/crc16.o
00:02:55.535 CC lib/dma/dma.o
00:02:55.535 CC lib/util/crc32c.o
00:02:55.535 CC lib/util/crc32.o
00:02:55.535 CC lib/ioat/ioat.o
00:02:55.535 CXX lib/trace_parser/trace.o
00:02:55.793 CC lib/vfio_user/host/vfio_user_pci.o
00:02:55.793 CC lib/util/crc32_ieee.o
00:02:55.793 CC lib/util/crc64.o
00:02:55.793 CC lib/util/dif.o
00:02:55.793 CC lib/util/fd.o
00:02:55.793 LIB libspdk_dma.a
00:02:55.793 SO libspdk_dma.so.3.0
00:02:55.793 CC lib/util/file.o
00:02:55.793 CC lib/util/hexlify.o
00:02:55.793 CC lib/util/iov.o
00:02:55.793 SYMLINK libspdk_dma.so
00:02:55.793 CC lib/util/math.o
00:02:55.793 CC lib/vfio_user/host/vfio_user.o
00:02:55.793 LIB libspdk_ioat.a
00:02:55.793 CC lib/util/pipe.o
00:02:55.793 CC lib/util/strerror_tls.o
00:02:56.054 SO libspdk_ioat.so.6.0
00:02:56.054 CC lib/util/string.o
00:02:56.054 SYMLINK libspdk_ioat.so
00:02:56.054 CC lib/util/uuid.o
00:02:56.054 CC lib/util/fd_group.o
00:02:56.054 CC lib/util/xor.o
00:02:56.054 CC lib/util/zipf.o
00:02:56.054 LIB libspdk_vfio_user.a
00:02:56.054 SO libspdk_vfio_user.so.4.0
00:02:56.054 SYMLINK libspdk_vfio_user.so
00:02:56.316 LIB libspdk_util.a
00:02:56.316 SO libspdk_util.so.8.0
00:02:56.575 LIB libspdk_trace_parser.a
00:02:56.575 SYMLINK libspdk_util.so
00:02:56.575 SO libspdk_trace_parser.so.4.0
00:02:56.575 CC lib/vmd/vmd.o
00:02:56.575 CC lib/json/json_parse.o
00:02:56.575 CC lib/json/json_write.o
00:02:56.575 CC lib/vmd/led.o
00:02:56.575 CC lib/json/json_util.o
00:02:56.575 CC lib/env_dpdk/env.o
00:02:56.575 SYMLINK libspdk_trace_parser.so
00:02:56.575 CC lib/rdma/common.o
00:02:56.575 CC lib/conf/conf.o
00:02:56.575 CC lib/idxd/idxd.o
00:02:56.575 CC lib/env_dpdk/memory.o
00:02:56.833 CC lib/env_dpdk/pci.o
00:02:56.833 CC lib/env_dpdk/init.o
00:02:56.833 LIB libspdk_conf.a
00:02:56.833 SO libspdk_conf.so.5.0
00:02:56.833 CC lib/idxd/idxd_user.o
00:02:56.833 CC lib/rdma/rdma_verbs.o
00:02:56.833 LIB libspdk_json.a
00:02:56.833 SO libspdk_json.so.5.1
00:02:56.833 SYMLINK libspdk_conf.so
00:02:56.833 CC lib/idxd/idxd_kernel.o
00:02:57.094 SYMLINK libspdk_json.so
00:02:57.094 CC lib/env_dpdk/threads.o
00:02:57.094 LIB libspdk_rdma.a
00:02:57.094 CC lib/env_dpdk/pci_ioat.o
00:02:57.094 SO libspdk_rdma.so.5.0
00:02:57.094 LIB libspdk_idxd.a
00:02:57.094 SO libspdk_idxd.so.11.0
00:02:57.094 CC lib/env_dpdk/pci_virtio.o
00:02:57.094 CC lib/env_dpdk/pci_vmd.o
00:02:57.094 SYMLINK libspdk_rdma.so
00:02:57.094 CC lib/env_dpdk/pci_idxd.o
00:02:57.094 CC lib/jsonrpc/jsonrpc_server.o
00:02:57.094 SYMLINK libspdk_idxd.so
00:02:57.094 CC lib/env_dpdk/pci_event.o
00:02:57.094 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:57.094 CC lib/env_dpdk/sigbus_handler.o
00:02:57.094 CC lib/env_dpdk/pci_dpdk.o
00:02:57.094 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:57.355 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:57.355 CC lib/jsonrpc/jsonrpc_client.o
00:02:57.355 LIB libspdk_vmd.a
00:02:57.355 SO libspdk_vmd.so.5.0
00:02:57.355 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:57.355 SYMLINK libspdk_vmd.so
00:02:57.616 LIB libspdk_jsonrpc.a
00:02:57.616 SO libspdk_jsonrpc.so.5.1
00:02:57.616 SYMLINK libspdk_jsonrpc.so
00:02:57.877 CC lib/rpc/rpc.o
00:02:57.877 LIB libspdk_rpc.a
00:02:57.877 SO libspdk_rpc.so.5.0
00:02:58.138 SYMLINK libspdk_rpc.so
00:02:58.138 LIB libspdk_env_dpdk.a
00:02:58.138 SO libspdk_env_dpdk.so.13.0
00:02:58.138 CC lib/sock/sock.o
00:02:58.138 CC lib/sock/sock_rpc.o
00:02:58.138 CC lib/trace/trace_flags.o
00:02:58.138 CC lib/trace/trace.o
00:02:58.138 CC lib/trace/trace_rpc.o
00:02:58.138 CC lib/notify/notify.o
00:02:58.138 CC lib/notify/notify_rpc.o
00:02:58.138 SYMLINK libspdk_env_dpdk.so
00:02:58.399 LIB libspdk_notify.a
00:02:58.399 SO libspdk_notify.so.5.0
00:02:58.399 LIB libspdk_trace.a
00:02:58.399 SYMLINK libspdk_notify.so
00:02:58.399 SO libspdk_trace.so.9.0
00:02:58.399 SYMLINK libspdk_trace.so
00:02:58.399 LIB libspdk_sock.a
00:02:58.657 SO libspdk_sock.so.8.0
00:02:58.657 CC lib/thread/thread.o
00:02:58.657 CC lib/thread/iobuf.o
00:02:58.657 SYMLINK libspdk_sock.so
00:02:58.657 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:58.657 CC lib/nvme/nvme_fabric.o
00:02:58.657 CC lib/nvme/nvme_ctrlr.o
00:02:58.657 CC lib/nvme/nvme_ns.o
00:02:58.657 CC lib/nvme/nvme_ns_cmd.o
00:02:58.657 CC lib/nvme/nvme_pcie_common.o
00:02:58.657 CC lib/nvme/nvme_pcie.o
00:02:58.657 CC lib/nvme/nvme_qpair.o
00:02:58.914 CC lib/nvme/nvme.o
00:02:59.481 CC lib/nvme/nvme_quirks.o
00:02:59.481 CC lib/nvme/nvme_transport.o
00:02:59.481 CC lib/nvme/nvme_discovery.o
00:02:59.481 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:59.481 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:59.481 CC lib/nvme/nvme_tcp.o
00:02:59.739 CC lib/nvme/nvme_opal.o
00:02:59.739 LIB libspdk_thread.a
00:02:59.739 CC lib/nvme/nvme_io_msg.o
00:02:59.739 SO libspdk_thread.so.9.0
00:02:59.739 SYMLINK libspdk_thread.so
00:02:59.739 CC lib/nvme/nvme_poll_group.o
00:02:59.739 CC lib/nvme/nvme_zns.o
00:02:59.998 CC lib/nvme/nvme_cuse.o
00:02:59.998 CC lib/nvme/nvme_vfio_user.o
00:02:59.998 CC lib/nvme/nvme_rdma.o
00:02:59.998 CC lib/accel/accel.o
00:03:00.257 CC lib/accel/accel_rpc.o
00:03:00.257 CC lib/accel/accel_sw.o
00:03:00.257 CC lib/blob/blobstore.o
00:03:00.257 CC lib/blob/request.o
00:03:00.257 CC lib/init/json_config.o
00:03:00.257 CC lib/init/subsystem.o
00:03:00.516 CC lib/virtio/virtio.o
00:03:00.516 CC lib/init/subsystem_rpc.o
00:03:00.516 CC lib/blob/zeroes.o
00:03:00.516 CC lib/init/rpc.o
00:03:00.775 CC lib/blob/blob_bs_dev.o
00:03:00.775 CC lib/virtio/virtio_vhost_user.o
00:03:00.775 CC lib/virtio/virtio_vfio_user.o
00:03:00.775 LIB libspdk_init.a
00:03:00.775 CC lib/virtio/virtio_pci.o
00:03:00.775 SO libspdk_init.so.4.0
00:03:00.775 SYMLINK libspdk_init.so
00:03:01.033 CC lib/event/log_rpc.o
00:03:01.033 CC lib/event/reactor.o
00:03:01.033 CC lib/event/app.o
00:03:01.033 CC lib/event/app_rpc.o
00:03:01.033 CC lib/event/scheduler_static.o
00:03:01.033 LIB libspdk_virtio.a
00:03:01.033 SO libspdk_virtio.so.6.0
00:03:01.033 LIB libspdk_accel.a
00:03:01.033 SYMLINK libspdk_virtio.so
00:03:01.033 SO libspdk_accel.so.14.0
00:03:01.320 SYMLINK libspdk_accel.so
00:03:01.320 LIB libspdk_nvme.a
00:03:01.320 LIB libspdk_event.a
00:03:01.320 CC lib/bdev/bdev.o
00:03:01.320 CC lib/bdev/bdev_rpc.o
00:03:01.320 CC lib/bdev/bdev_zone.o
00:03:01.320 CC lib/bdev/part.o
00:03:01.320 CC lib/bdev/scsi_nvme.o
00:03:01.320 SO libspdk_event.so.12.0
00:03:01.320 SO libspdk_nvme.so.12.0
00:03:01.582 SYMLINK libspdk_event.so
00:03:01.582 SYMLINK libspdk_nvme.so
00:03:03.494 LIB libspdk_blob.a
00:03:03.494 SO libspdk_blob.so.10.1
00:03:03.494 SYMLINK libspdk_blob.so
00:03:03.752 CC lib/blobfs/tree.o
00:03:03.752 CC lib/blobfs/blobfs.o
00:03:03.753 CC lib/lvol/lvol.o
00:03:04.013 LIB libspdk_bdev.a
00:03:04.013 SO libspdk_bdev.so.14.0
00:03:04.271 SYMLINK libspdk_bdev.so
00:03:04.271 CC lib/nbd/nbd.o
00:03:04.271 CC lib/nbd/nbd_rpc.o
00:03:04.271 CC lib/ublk/ublk.o
00:03:04.271 CC lib/scsi/dev.o
00:03:04.271 CC lib/ublk/ublk_rpc.o
00:03:04.271 CC lib/scsi/lun.o
00:03:04.271 CC lib/ftl/ftl_core.o
00:03:04.271 CC lib/nvmf/ctrlr.o
00:03:04.530 CC lib/nvmf/ctrlr_discovery.o
00:03:04.530 CC lib/nvmf/ctrlr_bdev.o
00:03:04.530 LIB libspdk_blobfs.a
00:03:04.530 CC lib/nvmf/subsystem.o
00:03:04.530 SO libspdk_blobfs.so.9.0
00:03:04.530 LIB libspdk_lvol.a
00:03:04.530 SYMLINK libspdk_blobfs.so
00:03:04.530 CC lib/scsi/port.o
00:03:04.530 SO libspdk_lvol.so.9.1
00:03:04.788 CC lib/scsi/scsi.o
00:03:04.788 SYMLINK libspdk_lvol.so
00:03:04.788 CC lib/scsi/scsi_bdev.o
00:03:04.788 CC lib/ftl/ftl_init.o
00:03:04.788 LIB libspdk_nbd.a
00:03:04.788 SO libspdk_nbd.so.6.0
00:03:04.788 CC lib/scsi/scsi_pr.o
00:03:04.788 CC lib/ftl/ftl_layout.o
00:03:04.788 SYMLINK libspdk_nbd.so
00:03:04.788 CC lib/nvmf/nvmf.o
00:03:04.788 CC lib/ftl/ftl_debug.o
00:03:05.050 CC lib/ftl/ftl_io.o
00:03:05.050 LIB libspdk_ublk.a
00:03:05.050 SO libspdk_ublk.so.2.0
00:03:05.050 CC lib/ftl/ftl_sb.o
00:03:05.050 SYMLINK libspdk_ublk.so
00:03:05.050 CC lib/scsi/scsi_rpc.o
00:03:05.050 CC lib/ftl/ftl_l2p.o
00:03:05.050 CC lib/scsi/task.o
00:03:05.050 CC lib/nvmf/nvmf_rpc.o
00:03:05.311 CC lib/nvmf/transport.o
00:03:05.311 CC lib/nvmf/tcp.o
00:03:05.311 CC lib/ftl/ftl_l2p_flat.o
00:03:05.311 CC lib/ftl/ftl_nv_cache.o
00:03:05.311 CC lib/nvmf/rdma.o
00:03:05.311 LIB libspdk_scsi.a
00:03:05.311 SO libspdk_scsi.so.8.0
00:03:05.311 CC lib/ftl/ftl_band.o
00:03:05.572 SYMLINK libspdk_scsi.so
00:03:05.572 CC lib/iscsi/conn.o
00:03:05.831 CC lib/iscsi/init_grp.o
00:03:05.831 CC lib/iscsi/iscsi.o
00:03:05.831 CC lib/iscsi/md5.o
00:03:05.831 CC lib/iscsi/param.o
00:03:05.831 CC lib/iscsi/portal_grp.o
00:03:06.089 CC lib/iscsi/tgt_node.o
00:03:06.089 CC lib/vhost/vhost.o
00:03:06.089 CC lib/vhost/vhost_rpc.o
00:03:06.089 CC lib/vhost/vhost_scsi.o
00:03:06.089 CC lib/ftl/ftl_band_ops.o
00:03:06.089 CC lib/ftl/ftl_writer.o
00:03:06.089 CC lib/ftl/ftl_rq.o
00:03:06.347 CC lib/vhost/vhost_blk.o
00:03:06.347 CC lib/vhost/rte_vhost_user.o
00:03:06.347 CC lib/ftl/ftl_reloc.o
00:03:06.347 CC lib/iscsi/iscsi_subsystem.o
00:03:06.606 CC lib/iscsi/iscsi_rpc.o
00:03:06.606 CC lib/iscsi/task.o
00:03:06.606 CC lib/ftl/ftl_l2p_cache.o
00:03:06.606 CC lib/ftl/ftl_p2l.o
00:03:06.864 CC lib/ftl/mngt/ftl_mngt.o
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:06.864 LIB libspdk_iscsi.a
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:06.864 SO libspdk_iscsi.so.7.0
00:03:06.864 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:07.122 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:07.122 SYMLINK libspdk_iscsi.so
00:03:07.122 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:07.122 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:07.122 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:07.122 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:07.122 CC lib/ftl/utils/ftl_conf.o
00:03:07.122 LIB libspdk_vhost.a
00:03:07.122 CC lib/ftl/utils/ftl_md.o
00:03:07.122 CC lib/ftl/utils/ftl_mempool.o
00:03:07.122 CC lib/ftl/utils/ftl_bitmap.o
00:03:07.122 SO libspdk_vhost.so.7.1
00:03:07.381 CC lib/ftl/utils/ftl_property.o
00:03:07.381 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:07.381 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:07.381 SYMLINK libspdk_vhost.so
00:03:07.381 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:07.381 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:07.381 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:07.381 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:07.381 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:07.381 LIB libspdk_nvmf.a
00:03:07.381 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:07.381 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:07.381 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:07.381 CC lib/ftl/base/ftl_base_dev.o
00:03:07.381 CC lib/ftl/base/ftl_base_bdev.o
00:03:07.381 CC lib/ftl/ftl_trace.o
00:03:07.640 SO libspdk_nvmf.so.17.0
00:03:07.640 LIB libspdk_ftl.a
00:03:07.640 SYMLINK libspdk_nvmf.so
00:03:07.907 SO libspdk_ftl.so.8.0
00:03:07.907 SYMLINK libspdk_ftl.so
00:03:08.166 CC module/env_dpdk/env_dpdk_rpc.o
00:03:08.166 CC module/accel/ioat/accel_ioat.o
00:03:08.166 CC module/blob/bdev/blob_bdev.o
00:03:08.166 CC module/accel/iaa/accel_iaa.o
00:03:08.166 CC module/sock/posix/posix.o
00:03:08.166 CC module/accel/dsa/accel_dsa.o
00:03:08.166 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:08.166 CC module/accel/error/accel_error.o
00:03:08.166 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:08.166 CC module/scheduler/gscheduler/gscheduler.o
00:03:08.166 LIB libspdk_env_dpdk_rpc.a
00:03:08.166 SO libspdk_env_dpdk_rpc.so.5.0
00:03:08.423 SYMLINK libspdk_env_dpdk_rpc.so
00:03:08.423 CC module/accel/error/accel_error_rpc.o
00:03:08.423 CC module/accel/ioat/accel_ioat_rpc.o
00:03:08.423 LIB libspdk_scheduler_gscheduler.a
00:03:08.423 CC module/accel/iaa/accel_iaa_rpc.o
00:03:08.423 LIB libspdk_scheduler_dpdk_governor.a
00:03:08.423 SO libspdk_scheduler_gscheduler.so.3.0
00:03:08.423 SO libspdk_scheduler_dpdk_governor.so.3.0
00:03:08.423 LIB libspdk_scheduler_dynamic.a
00:03:08.423 CC module/accel/dsa/accel_dsa_rpc.o
00:03:08.423 LIB libspdk_blob_bdev.a
00:03:08.423 SO libspdk_scheduler_dynamic.so.3.0
00:03:08.423 LIB libspdk_accel_error.a
00:03:08.423 SO libspdk_blob_bdev.so.10.1
00:03:08.423 SYMLINK libspdk_scheduler_gscheduler.so
00:03:08.423 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:08.423 SO libspdk_accel_error.so.1.0
00:03:08.423 LIB libspdk_accel_ioat.a
00:03:08.423 SYMLINK libspdk_scheduler_dynamic.so
00:03:08.423 SO libspdk_accel_ioat.so.5.0
00:03:08.423 SYMLINK libspdk_blob_bdev.so
00:03:08.423 SYMLINK libspdk_accel_error.so
00:03:08.423 LIB libspdk_accel_iaa.a
00:03:08.423 LIB libspdk_accel_dsa.a
00:03:08.423 SO libspdk_accel_iaa.so.2.0
00:03:08.423 SYMLINK libspdk_accel_ioat.so
00:03:08.423 SO libspdk_accel_dsa.so.4.0
00:03:08.423 SYMLINK libspdk_accel_iaa.so
00:03:08.681 SYMLINK libspdk_accel_dsa.so
00:03:08.681 CC module/bdev/gpt/gpt.o
00:03:08.681 CC module/bdev/null/bdev_null.o
00:03:08.681 CC module/bdev/delay/vbdev_delay.o
00:03:08.681 CC module/blobfs/bdev/blobfs_bdev.o
00:03:08.681 CC module/bdev/error/vbdev_error.o
00:03:08.681 CC module/bdev/malloc/bdev_malloc.o
00:03:08.681 CC module/bdev/lvol/vbdev_lvol.o
00:03:08.681 CC module/bdev/nvme/bdev_nvme.o
00:03:08.681 CC module/bdev/passthru/vbdev_passthru.o
00:03:08.681 CC module/bdev/gpt/vbdev_gpt.o
00:03:08.681 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:08.681 CC module/bdev/null/bdev_null_rpc.o
00:03:08.939 CC module/bdev/error/vbdev_error_rpc.o
00:03:08.939 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:08.939 LIB libspdk_sock_posix.a
00:03:08.939 LIB libspdk_bdev_null.a
00:03:08.939 LIB libspdk_blobfs_bdev.a
00:03:08.939 SO libspdk_bdev_null.so.5.0
00:03:08.939 SO libspdk_sock_posix.so.5.0
00:03:08.939 SO libspdk_blobfs_bdev.so.5.0
00:03:08.939 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:08.939 SYMLINK libspdk_bdev_null.so
00:03:08.939 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:08.939 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:08.939 SYMLINK libspdk_sock_posix.so
00:03:08.939 SYMLINK libspdk_blobfs_bdev.so
00:03:08.939 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:08.939 CC module/bdev/nvme/nvme_rpc.o
00:03:08.939 LIB libspdk_bdev_error.a
00:03:08.939 LIB libspdk_bdev_gpt.a
00:03:08.939 LIB libspdk_bdev_malloc.a
00:03:08.939 SO libspdk_bdev_error.so.5.0
00:03:08.939 SO libspdk_bdev_gpt.so.5.0
00:03:08.939 SO libspdk_bdev_malloc.so.5.0
00:03:08.939 CC module/bdev/raid/bdev_raid.o
00:03:09.196 LIB libspdk_bdev_passthru.a
00:03:09.196 SO libspdk_bdev_passthru.so.5.0
00:03:09.196 LIB libspdk_bdev_delay.a
00:03:09.196 SYMLINK libspdk_bdev_error.so
00:03:09.197 SYMLINK libspdk_bdev_gpt.so
00:03:09.197 CC module/bdev/raid/bdev_raid_rpc.o
00:03:09.197 SYMLINK libspdk_bdev_malloc.so
00:03:09.197 CC module/bdev/raid/bdev_raid_sb.o
00:03:09.197 SO libspdk_bdev_delay.so.5.0
00:03:09.197 SYMLINK libspdk_bdev_passthru.so
00:03:09.197 CC module/bdev/raid/raid0.o
00:03:09.197 SYMLINK libspdk_bdev_delay.so
00:03:09.197 CC module/bdev/raid/raid1.o
00:03:09.197 CC module/bdev/split/vbdev_split.o
00:03:09.197 CC module/bdev/raid/concat.o
00:03:09.197 LIB libspdk_bdev_lvol.a
00:03:09.197 SO libspdk_bdev_lvol.so.5.0
00:03:09.197 SYMLINK libspdk_bdev_lvol.so
00:03:09.454 CC module/bdev/xnvme/bdev_xnvme.o
00:03:09.454 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:09.454 CC module/bdev/ftl/bdev_ftl.o
00:03:09.454 CC module/bdev/aio/bdev_aio.o
00:03:09.454 CC module/bdev/aio/bdev_aio_rpc.o
00:03:09.454 CC module/bdev/iscsi/bdev_iscsi.o
00:03:09.454 CC module/bdev/split/vbdev_split_rpc.o
00:03:09.454 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:03:09.710 LIB libspdk_bdev_split.a
00:03:09.710 SO libspdk_bdev_split.so.5.0
00:03:09.710 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:09.710 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:09.710 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:09.710 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:09.710 SYMLINK libspdk_bdev_split.so
00:03:09.710 CC module/bdev/nvme/bdev_mdns_client.o
00:03:09.710 LIB libspdk_bdev_xnvme.a
00:03:09.710 CC module/bdev/nvme/vbdev_opal.o
00:03:09.710 LIB libspdk_bdev_iscsi.a
00:03:09.710 SO libspdk_bdev_xnvme.so.2.0
00:03:09.710 SO libspdk_bdev_iscsi.so.5.0
00:03:09.710 LIB libspdk_bdev_aio.a
00:03:09.710 LIB libspdk_bdev_zone_block.a
00:03:09.710 SO libspdk_bdev_aio.so.5.0
00:03:09.710 SO libspdk_bdev_zone_block.so.5.0
00:03:09.710 LIB libspdk_bdev_ftl.a
00:03:09.710 SYMLINK libspdk_bdev_iscsi.so
00:03:09.710 SYMLINK libspdk_bdev_xnvme.so
00:03:09.710 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:09.710 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:09.710 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:09.710 SO libspdk_bdev_ftl.so.5.0
00:03:09.710 SYMLINK libspdk_bdev_zone_block.so
00:03:09.710 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:09.710 SYMLINK libspdk_bdev_aio.so
00:03:09.969 SYMLINK libspdk_bdev_ftl.so
00:03:09.969 LIB libspdk_bdev_raid.a
00:03:09.969 SO libspdk_bdev_raid.so.5.0
00:03:09.969 SYMLINK libspdk_bdev_raid.so
00:03:10.228 LIB libspdk_bdev_virtio.a
00:03:10.228 SO libspdk_bdev_virtio.so.5.0
00:03:10.228 SYMLINK libspdk_bdev_virtio.so
00:03:10.798 LIB libspdk_bdev_nvme.a
00:03:10.798 SO libspdk_bdev_nvme.so.6.0
00:03:10.798 SYMLINK libspdk_bdev_nvme.so
00:03:11.365 CC module/event/subsystems/iobuf/iobuf.o
00:03:11.365 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:11.365 CC module/event/subsystems/sock/sock.o
00:03:11.365 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:11.365 CC module/event/subsystems/scheduler/scheduler.o
00:03:11.365 CC module/event/subsystems/vmd/vmd.o
00:03:11.365 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:11.365 LIB libspdk_event_sock.a
00:03:11.365 LIB libspdk_event_vhost_blk.a
00:03:11.365 LIB libspdk_event_iobuf.a
00:03:11.365 LIB libspdk_event_scheduler.a
00:03:11.365 LIB libspdk_event_vmd.a
00:03:11.365 SO libspdk_event_sock.so.4.0
00:03:11.365 SO libspdk_event_vhost_blk.so.2.0
00:03:11.365 SO libspdk_event_iobuf.so.2.0
00:03:11.365 SO libspdk_event_scheduler.so.3.0
00:03:11.365 SO libspdk_event_vmd.so.5.0
00:03:11.365 SYMLINK libspdk_event_vhost_blk.so
00:03:11.365 SYMLINK libspdk_event_sock.so
00:03:11.365 SYMLINK libspdk_event_iobuf.so
00:03:11.365 SYMLINK libspdk_event_scheduler.so
00:03:11.365 SYMLINK libspdk_event_vmd.so
00:03:11.622 CC module/event/subsystems/accel/accel.o
00:03:11.622 LIB libspdk_event_accel.a
00:03:11.622 SO libspdk_event_accel.so.5.0
00:03:11.622 SYMLINK libspdk_event_accel.so
00:03:11.880 CC module/event/subsystems/bdev/bdev.o
00:03:11.880 LIB libspdk_event_bdev.a
00:03:11.880 SO libspdk_event_bdev.so.5.0
00:03:12.138 SYMLINK libspdk_event_bdev.so
00:03:12.138 CC module/event/subsystems/nbd/nbd.o
00:03:12.138 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:12.138 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:12.138 CC module/event/subsystems/scsi/scsi.o
00:03:12.138 CC module/event/subsystems/ublk/ublk.o
00:03:12.138 LIB libspdk_event_nbd.a
00:03:12.138 SO libspdk_event_nbd.so.5.0
00:03:12.138 LIB libspdk_event_scsi.a
00:03:12.138 LIB libspdk_event_ublk.a
00:03:12.398 SO libspdk_event_scsi.so.5.0
00:03:12.398 SO libspdk_event_ublk.so.2.0
00:03:12.398 SYMLINK libspdk_event_nbd.so
00:03:12.398 LIB libspdk_event_nvmf.a
00:03:12.398 SYMLINK libspdk_event_scsi.so
00:03:12.398 SYMLINK libspdk_event_ublk.so
00:03:12.398 SO libspdk_event_nvmf.so.5.0
00:03:12.398 SYMLINK libspdk_event_nvmf.so
00:03:12.398 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:12.398 CC module/event/subsystems/iscsi/iscsi.o
00:03:12.656 LIB libspdk_event_vhost_scsi.a
00:03:12.656 LIB libspdk_event_iscsi.a
00:03:12.656 SO libspdk_event_vhost_scsi.so.2.0
00:03:12.656 SO libspdk_event_iscsi.so.5.0
00:03:12.656 SYMLINK libspdk_event_vhost_scsi.so
00:03:12.656 SYMLINK libspdk_event_iscsi.so
00:03:12.656 SO libspdk.so.5.0
00:03:12.656 SYMLINK libspdk.so
00:03:12.914 CC app/spdk_lspci/spdk_lspci.o
00:03:12.914 CC app/trace_record/trace_record.o
00:03:12.914 CXX app/trace/trace.o
00:03:12.914 CC app/spdk_nvme_perf/perf.o
00:03:12.914 CC app/iscsi_tgt/iscsi_tgt.o
00:03:12.914 CC app/nvmf_tgt/nvmf_main.o
00:03:12.914 CC examples/accel/perf/accel_perf.o
00:03:12.914 CC test/accel/dif/dif.o
00:03:12.914 CC app/spdk_tgt/spdk_tgt.o
00:03:12.914 CC examples/bdev/hello_world/hello_bdev.o
00:03:12.914 LINK spdk_lspci
00:03:12.914 LINK nvmf_tgt
00:03:13.172 LINK spdk_trace_record
00:03:13.172 LINK iscsi_tgt
00:03:13.172 LINK spdk_tgt
00:03:13.172 CC app/spdk_nvme_identify/identify.o
00:03:13.172 LINK hello_bdev
00:03:13.172 LINK spdk_trace
00:03:13.172 LINK dif
00:03:13.172 LINK accel_perf
00:03:13.172 CC test/app/bdev_svc/bdev_svc.o
00:03:13.172 CC app/spdk_nvme_discover/discovery_aer.o
00:03:13.172 CC test/bdev/bdevio/bdevio.o
00:03:13.432 CC test/blobfs/mkfs/mkfs.o
00:03:13.432 TEST_HEADER include/spdk/accel.h
00:03:13.432 TEST_HEADER include/spdk/accel_module.h
00:03:13.432 TEST_HEADER include/spdk/assert.h
00:03:13.432 TEST_HEADER include/spdk/barrier.h
00:03:13.432 TEST_HEADER include/spdk/base64.h
00:03:13.432 TEST_HEADER include/spdk/bdev.h
00:03:13.432 TEST_HEADER include/spdk/bdev_module.h
00:03:13.432 TEST_HEADER include/spdk/bdev_zone.h
include/spdk/bdev_zone.h 00:03:13.432 TEST_HEADER include/spdk/bit_array.h 00:03:13.432 TEST_HEADER include/spdk/bit_pool.h 00:03:13.432 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.432 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.432 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.432 TEST_HEADER include/spdk/blobfs.h 00:03:13.432 TEST_HEADER include/spdk/blob.h 00:03:13.432 TEST_HEADER include/spdk/conf.h 00:03:13.432 TEST_HEADER include/spdk/config.h 00:03:13.432 TEST_HEADER include/spdk/cpuset.h 00:03:13.432 TEST_HEADER include/spdk/crc16.h 00:03:13.432 TEST_HEADER include/spdk/crc32.h 00:03:13.432 TEST_HEADER include/spdk/crc64.h 00:03:13.432 TEST_HEADER include/spdk/dif.h 00:03:13.432 TEST_HEADER include/spdk/dma.h 00:03:13.432 TEST_HEADER include/spdk/endian.h 00:03:13.432 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.432 TEST_HEADER include/spdk/env.h 00:03:13.432 TEST_HEADER include/spdk/event.h 00:03:13.432 TEST_HEADER include/spdk/fd_group.h 00:03:13.432 TEST_HEADER include/spdk/fd.h 00:03:13.432 TEST_HEADER include/spdk/file.h 00:03:13.432 CC examples/blob/hello_world/hello_blob.o 00:03:13.432 TEST_HEADER include/spdk/ftl.h 00:03:13.432 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.432 TEST_HEADER include/spdk/hexlify.h 00:03:13.432 TEST_HEADER include/spdk/histogram_data.h 00:03:13.432 TEST_HEADER include/spdk/idxd.h 00:03:13.432 LINK bdev_svc 00:03:13.432 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.432 TEST_HEADER include/spdk/init.h 00:03:13.432 TEST_HEADER include/spdk/ioat.h 00:03:13.432 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.432 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.432 TEST_HEADER include/spdk/json.h 00:03:13.432 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.432 TEST_HEADER include/spdk/likely.h 00:03:13.432 TEST_HEADER include/spdk/log.h 00:03:13.432 TEST_HEADER include/spdk/lvol.h 00:03:13.432 TEST_HEADER include/spdk/memory.h 00:03:13.432 TEST_HEADER include/spdk/mmio.h 00:03:13.432 TEST_HEADER include/spdk/nbd.h 00:03:13.432 TEST_HEADER include/spdk/notify.h 00:03:13.432 TEST_HEADER include/spdk/nvme.h 00:03:13.432 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.432 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.432 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.432 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.432 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.432 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.432 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.432 TEST_HEADER include/spdk/nvmf.h 00:03:13.432 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.432 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.432 TEST_HEADER include/spdk/opal.h 00:03:13.432 TEST_HEADER include/spdk/opal_spec.h 00:03:13.432 TEST_HEADER include/spdk/pci_ids.h 00:03:13.432 TEST_HEADER include/spdk/pipe.h 00:03:13.432 TEST_HEADER include/spdk/queue.h 00:03:13.432 TEST_HEADER include/spdk/reduce.h 00:03:13.432 TEST_HEADER include/spdk/rpc.h 00:03:13.432 TEST_HEADER include/spdk/scheduler.h 00:03:13.432 TEST_HEADER include/spdk/scsi.h 00:03:13.432 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.432 TEST_HEADER include/spdk/sock.h 00:03:13.432 TEST_HEADER include/spdk/stdinc.h 00:03:13.432 TEST_HEADER include/spdk/string.h 00:03:13.432 TEST_HEADER include/spdk/thread.h 00:03:13.432 LINK mkfs 00:03:13.432 TEST_HEADER include/spdk/trace.h 00:03:13.432 TEST_HEADER include/spdk/trace_parser.h 00:03:13.432 LINK spdk_nvme_discover 00:03:13.432 TEST_HEADER include/spdk/tree.h 00:03:13.432 TEST_HEADER include/spdk/ublk.h 00:03:13.432 TEST_HEADER include/spdk/util.h 00:03:13.432 
TEST_HEADER include/spdk/uuid.h 00:03:13.432 CC test/dma/test_dma/test_dma.o 00:03:13.432 TEST_HEADER include/spdk/version.h 00:03:13.432 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.432 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.432 TEST_HEADER include/spdk/vhost.h 00:03:13.432 TEST_HEADER include/spdk/vmd.h 00:03:13.432 TEST_HEADER include/spdk/xor.h 00:03:13.432 LINK spdk_nvme_perf 00:03:13.432 TEST_HEADER include/spdk/zipf.h 00:03:13.432 CXX test/cpp_headers/accel.o 00:03:13.743 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.743 LINK hello_blob 00:03:13.743 CXX test/cpp_headers/accel_module.o 00:03:13.743 CXX test/cpp_headers/assert.o 00:03:13.743 CXX test/cpp_headers/barrier.o 00:03:13.743 LINK bdevio 00:03:13.743 CC app/spdk_top/spdk_top.o 00:03:13.743 LINK test_dma 00:03:13.743 CXX test/cpp_headers/base64.o 00:03:13.743 CC examples/blob/cli/blobcli.o 00:03:13.743 CC app/vhost/vhost.o 00:03:13.743 CC app/spdk_dd/spdk_dd.o 00:03:13.743 CXX test/cpp_headers/bdev.o 00:03:14.025 LINK spdk_nvme_identify 00:03:14.025 CC test/env/vtophys/vtophys.o 00:03:14.025 LINK vhost 00:03:14.025 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.025 CXX test/cpp_headers/bdev_module.o 00:03:14.025 LINK nvme_fuzz 00:03:14.025 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.025 LINK vtophys 00:03:14.025 LINK spdk_dd 00:03:14.025 LINK bdevperf 00:03:14.025 LINK env_dpdk_post_init 00:03:14.025 CXX test/cpp_headers/bdev_zone.o 00:03:14.025 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.025 CC test/env/memory/memory_ut.o 00:03:14.284 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.284 LINK blobcli 00:03:14.284 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.284 CXX test/cpp_headers/bit_array.o 00:03:14.284 LINK mem_callbacks 00:03:14.284 CC test/event/event_perf/event_perf.o 00:03:14.284 CXX test/cpp_headers/bit_pool.o 00:03:14.284 CC test/lvol/esnap/esnap.o 00:03:14.542 CC test/nvme/aer/aer.o 00:03:14.542 LINK event_perf 00:03:14.542 CC examples/ioat/perf/perf.o 00:03:14.542 CXX test/cpp_headers/blob_bdev.o 00:03:14.542 CC app/fio/nvme/fio_plugin.o 00:03:14.542 LINK spdk_top 00:03:14.542 CC test/event/reactor/reactor.o 00:03:14.542 LINK vhost_fuzz 00:03:14.542 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.542 CXX test/cpp_headers/blobfs.o 00:03:14.801 LINK ioat_perf 00:03:14.801 LINK aer 00:03:14.801 CXX test/cpp_headers/blob.o 00:03:14.801 LINK memory_ut 00:03:14.801 LINK reactor 00:03:14.801 CC examples/ioat/verify/verify.o 00:03:14.801 CC test/event/reactor_perf/reactor_perf.o 00:03:14.801 CC test/app/histogram_perf/histogram_perf.o 00:03:14.801 CXX test/cpp_headers/conf.o 00:03:14.801 CC test/nvme/reset/reset.o 00:03:15.059 CC test/env/pci/pci_ut.o 00:03:15.059 CC test/app/jsoncat/jsoncat.o 00:03:15.059 LINK reactor_perf 00:03:15.059 CXX test/cpp_headers/config.o 00:03:15.059 LINK histogram_perf 00:03:15.059 LINK spdk_nvme 00:03:15.059 CXX test/cpp_headers/cpuset.o 00:03:15.059 LINK verify 00:03:15.059 LINK reset 00:03:15.059 LINK jsoncat 00:03:15.059 CC test/event/app_repeat/app_repeat.o 00:03:15.059 CXX test/cpp_headers/crc16.o 00:03:15.059 CC app/fio/bdev/fio_plugin.o 00:03:15.317 CC test/event/scheduler/scheduler.o 00:03:15.317 CC test/nvme/sgl/sgl.o 00:03:15.317 CC examples/nvme/hello_world/hello_world.o 00:03:15.317 LINK app_repeat 00:03:15.317 CXX test/cpp_headers/crc32.o 00:03:15.317 CC examples/sock/hello_world/hello_sock.o 00:03:15.317 LINK pci_ut 00:03:15.317 LINK scheduler 00:03:15.317 CXX test/cpp_headers/crc64.o 00:03:15.317 CC test/nvme/e2edp/nvme_dp.o 
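The long TEST_HEADER run above, followed by the CXX test/cpp_headers/*.o compiles, is the header self-containedness check: every public SPDK header is compiled in its own translation unit so a header that silently depends on something included before it fails on its own. A rough bash sketch of the technique, assuming an include/spdk layout and g++ on PATH (the harness generates its per-header sources differently):

  #!/usr/bin/env bash
  # One-line translation unit per public header; a header that is not
  # self-contained fails its own compile and gets reported.
  for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/hdr_check.cpp
      g++ -I include -c /tmp/hdr_check.cpp -o /dev/null \
          || echo "not self-contained: $h"
  done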
00:03:15.576 LINK hello_world 00:03:15.576 LINK sgl 00:03:15.576 CXX test/cpp_headers/dif.o 00:03:15.576 LINK hello_sock 00:03:15.576 LINK spdk_bdev 00:03:15.576 CC test/rpc_client/rpc_client_test.o 00:03:15.576 CXX test/cpp_headers/dma.o 00:03:15.576 LINK nvme_dp 00:03:15.576 CC examples/nvme/reconnect/reconnect.o 00:03:15.576 CC test/thread/poller_perf/poller_perf.o 00:03:15.576 CXX test/cpp_headers/endian.o 00:03:15.576 CXX test/cpp_headers/env_dpdk.o 00:03:15.834 CC test/nvme/overhead/overhead.o 00:03:15.834 CXX test/cpp_headers/env.o 00:03:15.834 LINK rpc_client_test 00:03:15.835 LINK poller_perf 00:03:15.835 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.835 LINK iscsi_fuzz 00:03:15.835 CC examples/vmd/led/led.o 00:03:15.835 CXX test/cpp_headers/event.o 00:03:15.835 CXX test/cpp_headers/fd_group.o 00:03:15.835 LINK reconnect 00:03:15.835 CC examples/nvmf/nvmf/nvmf.o 00:03:15.835 CC test/nvme/err_injection/err_injection.o 00:03:15.835 LINK led 00:03:15.835 LINK lsvmd 00:03:15.835 LINK overhead 00:03:16.093 CXX test/cpp_headers/fd.o 00:03:16.093 CC examples/nvme/arbitration/arbitration.o 00:03:16.093 CC test/app/stub/stub.o 00:03:16.093 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.093 LINK err_injection 00:03:16.093 CC examples/nvme/hotplug/hotplug.o 00:03:16.093 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.093 CXX test/cpp_headers/file.o 00:03:16.093 CC examples/util/zipf/zipf.o 00:03:16.093 LINK nvmf 00:03:16.093 LINK stub 00:03:16.093 CXX test/cpp_headers/ftl.o 00:03:16.352 CC test/nvme/startup/startup.o 00:03:16.352 LINK arbitration 00:03:16.352 LINK cmb_copy 00:03:16.352 LINK hotplug 00:03:16.352 LINK zipf 00:03:16.352 CXX test/cpp_headers/gpt_spec.o 00:03:16.352 CXX test/cpp_headers/hexlify.o 00:03:16.352 LINK startup 00:03:16.352 CC test/nvme/reserve/reserve.o 00:03:16.352 CXX test/cpp_headers/histogram_data.o 00:03:16.352 CXX test/cpp_headers/idxd.o 00:03:16.352 LINK nvme_manage 00:03:16.352 CXX test/cpp_headers/idxd_spec.o 00:03:16.352 CC test/nvme/simple_copy/simple_copy.o 00:03:16.352 CXX test/cpp_headers/init.o 00:03:16.611 CC test/nvme/connect_stress/connect_stress.o 00:03:16.611 CXX test/cpp_headers/ioat.o 00:03:16.611 LINK reserve 00:03:16.611 CC examples/nvme/abort/abort.o 00:03:16.611 CC examples/thread/thread/thread_ex.o 00:03:16.611 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.611 LINK simple_copy 00:03:16.611 CC examples/idxd/perf/perf.o 00:03:16.611 CC test/nvme/boot_partition/boot_partition.o 00:03:16.611 LINK connect_stress 00:03:16.611 CXX test/cpp_headers/ioat_spec.o 00:03:16.611 CXX test/cpp_headers/iscsi_spec.o 00:03:16.611 LINK pmr_persistence 00:03:16.611 CC test/nvme/compliance/nvme_compliance.o 00:03:16.869 LINK boot_partition 00:03:16.869 CXX test/cpp_headers/json.o 00:03:16.869 LINK thread 00:03:16.869 CXX test/cpp_headers/jsonrpc.o 00:03:16.869 CXX test/cpp_headers/likely.o 00:03:16.869 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.869 LINK idxd_perf 00:03:16.869 LINK abort 00:03:16.869 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.869 CXX test/cpp_headers/log.o 00:03:16.869 CXX test/cpp_headers/lvol.o 00:03:16.869 CC test/nvme/fdp/fdp.o 00:03:16.869 CC test/nvme/cuse/cuse.o 00:03:16.869 LINK fused_ordering 00:03:16.869 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.127 CXX test/cpp_headers/memory.o 00:03:17.127 CXX test/cpp_headers/mmio.o 00:03:17.127 CXX test/cpp_headers/nbd.o 00:03:17.127 CXX test/cpp_headers/notify.o 00:03:17.127 LINK nvme_compliance 00:03:17.127 CXX test/cpp_headers/nvme.o 00:03:17.127 LINK 
doorbell_aers 00:03:17.127 CXX test/cpp_headers/nvme_intel.o 00:03:17.127 LINK interrupt_tgt 00:03:17.127 CXX test/cpp_headers/nvme_ocssd.o 00:03:17.127 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:17.128 LINK fdp 00:03:17.128 CXX test/cpp_headers/nvme_spec.o 00:03:17.128 CXX test/cpp_headers/nvme_zns.o 00:03:17.128 CXX test/cpp_headers/nvmf_cmd.o 00:03:17.128 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:17.128 CXX test/cpp_headers/nvmf.o 00:03:17.128 CXX test/cpp_headers/nvmf_spec.o 00:03:17.385 CXX test/cpp_headers/nvmf_transport.o 00:03:17.385 CXX test/cpp_headers/opal.o 00:03:17.385 CXX test/cpp_headers/opal_spec.o 00:03:17.385 CXX test/cpp_headers/pci_ids.o 00:03:17.385 CXX test/cpp_headers/pipe.o 00:03:17.385 CXX test/cpp_headers/queue.o 00:03:17.385 CXX test/cpp_headers/reduce.o 00:03:17.385 CXX test/cpp_headers/rpc.o 00:03:17.385 CXX test/cpp_headers/scheduler.o 00:03:17.385 CXX test/cpp_headers/scsi.o 00:03:17.385 CXX test/cpp_headers/scsi_spec.o 00:03:17.385 CXX test/cpp_headers/sock.o 00:03:17.385 CXX test/cpp_headers/stdinc.o 00:03:17.385 CXX test/cpp_headers/string.o 00:03:17.385 CXX test/cpp_headers/thread.o 00:03:17.385 CXX test/cpp_headers/trace.o 00:03:17.643 CXX test/cpp_headers/trace_parser.o 00:03:17.643 CXX test/cpp_headers/tree.o 00:03:17.643 CXX test/cpp_headers/ublk.o 00:03:17.643 CXX test/cpp_headers/util.o 00:03:17.643 CXX test/cpp_headers/uuid.o 00:03:17.643 CXX test/cpp_headers/version.o 00:03:17.643 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.643 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.643 CXX test/cpp_headers/vhost.o 00:03:17.643 CXX test/cpp_headers/vmd.o 00:03:17.643 LINK cuse 00:03:17.643 CXX test/cpp_headers/xor.o 00:03:17.643 CXX test/cpp_headers/zipf.o 00:03:19.025 LINK esnap 00:03:19.026 00:03:19.026 real 0m47.666s 00:03:19.026 user 4m43.705s 00:03:19.026 sys 1m1.365s 00:03:19.026 13:58:21 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:19.026 13:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:19.026 ************************************ 00:03:19.026 END TEST make 00:03:19.026 ************************************ 00:03:19.286 13:58:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:19.286 13:58:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:19.286 13:58:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:19.286 13:58:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:19.286 13:58:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:19.286 13:58:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:19.286 13:58:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:19.286 13:58:22 -- scripts/common.sh@335 -- # IFS=.-: 00:03:19.286 13:58:22 -- scripts/common.sh@335 -- # read -ra ver1 00:03:19.286 13:58:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.286 13:58:22 -- scripts/common.sh@336 -- # read -ra ver2 00:03:19.286 13:58:22 -- scripts/common.sh@337 -- # local 'op=<' 00:03:19.286 13:58:22 -- scripts/common.sh@339 -- # ver1_l=2 00:03:19.286 13:58:22 -- scripts/common.sh@340 -- # ver2_l=1 00:03:19.286 13:58:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:19.286 13:58:22 -- scripts/common.sh@343 -- # case "$op" in 00:03:19.286 13:58:22 -- scripts/common.sh@344 -- # : 1 00:03:19.286 13:58:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:19.286 13:58:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:19.286 13:58:22 -- scripts/common.sh@364 -- # decimal 1 00:03:19.286 13:58:22 -- scripts/common.sh@352 -- # local d=1 00:03:19.286 13:58:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.286 13:58:22 -- scripts/common.sh@354 -- # echo 1 00:03:19.286 13:58:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:19.286 13:58:22 -- scripts/common.sh@365 -- # decimal 2 00:03:19.286 13:58:22 -- scripts/common.sh@352 -- # local d=2 00:03:19.286 13:58:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.286 13:58:22 -- scripts/common.sh@354 -- # echo 2 00:03:19.286 13:58:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:19.286 13:58:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:19.286 13:58:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:19.286 13:58:22 -- scripts/common.sh@367 -- # return 0 00:03:19.286 13:58:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.286 13:58:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.286 --rc genhtml_branch_coverage=1 00:03:19.286 --rc genhtml_function_coverage=1 00:03:19.286 --rc genhtml_legend=1 00:03:19.286 --rc geninfo_all_blocks=1 00:03:19.286 --rc geninfo_unexecuted_blocks=1 00:03:19.286 00:03:19.286 ' 00:03:19.286 13:58:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.286 --rc genhtml_branch_coverage=1 00:03:19.286 --rc genhtml_function_coverage=1 00:03:19.286 --rc genhtml_legend=1 00:03:19.286 --rc geninfo_all_blocks=1 00:03:19.286 --rc geninfo_unexecuted_blocks=1 00:03:19.286 00:03:19.286 ' 00:03:19.286 13:58:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.286 --rc genhtml_branch_coverage=1 00:03:19.286 --rc genhtml_function_coverage=1 00:03:19.286 --rc genhtml_legend=1 00:03:19.286 --rc geninfo_all_blocks=1 00:03:19.286 --rc geninfo_unexecuted_blocks=1 00:03:19.286 00:03:19.286 ' 00:03:19.286 13:58:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:19.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.286 --rc genhtml_branch_coverage=1 00:03:19.286 --rc genhtml_function_coverage=1 00:03:19.286 --rc genhtml_legend=1 00:03:19.286 --rc geninfo_all_blocks=1 00:03:19.286 --rc geninfo_unexecuted_blocks=1 00:03:19.286 00:03:19.286 ' 00:03:19.286 13:58:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:19.286 13:58:22 -- nvmf/common.sh@7 -- # uname -s 00:03:19.286 13:58:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.286 13:58:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.286 13:58:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.286 13:58:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.286 13:58:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.286 13:58:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.286 13:58:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.286 13:58:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.286 13:58:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.286 13:58:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.286 13:58:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:097a4e0a-03c7-4aa5-9446-1422b0e678be 00:03:19.286 
13:58:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=097a4e0a-03c7-4aa5-9446-1422b0e678be 00:03:19.286 13:58:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.286 13:58:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.286 13:58:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:19.286 13:58:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:19.286 13:58:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.286 13:58:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.286 13:58:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.286 13:58:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.286 13:58:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.286 13:58:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.286 13:58:22 -- paths/export.sh@5 -- # export PATH 00:03:19.287 13:58:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.287 13:58:22 -- nvmf/common.sh@46 -- # : 0 00:03:19.287 13:58:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:19.287 13:58:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:19.287 13:58:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:19.287 13:58:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.287 13:58:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.287 13:58:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:19.287 13:58:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:19.287 13:58:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:19.287 13:58:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.287 13:58:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.287 13:58:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.287 13:58:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.287 13:58:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.287 13:58:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.287 13:58:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.287 13:58:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.287 13:58:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.287 13:58:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.287 13:58:22 -- spdk/autotest.sh@47 
-- # /usr/sbin/udevadm monitor --property 00:03:19.287 13:58:22 -- spdk/autotest.sh@48 -- # udevadm_pid=48165 00:03:19.287 13:58:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:19.287 13:58:22 -- spdk/autotest.sh@54 -- # echo 48168 00:03:19.287 13:58:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:19.287 13:58:22 -- spdk/autotest.sh@56 -- # echo 48171 00:03:19.287 13:58:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:19.287 13:58:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:19.287 13:58:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:19.287 13:58:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:19.287 13:58:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:19.287 13:58:22 -- common/autotest_common.sh@10 -- # set +x 00:03:19.287 13:58:22 -- spdk/autotest.sh@70 -- # create_test_list 00:03:19.287 13:58:22 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:19.287 13:58:22 -- common/autotest_common.sh@10 -- # set +x 00:03:19.548 13:58:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:19.548 13:58:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:19.548 13:58:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:19.548 13:58:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:19.548 13:58:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:19.548 13:58:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:19.548 13:58:22 -- common/autotest_common.sh@1450 -- # uname 00:03:19.548 13:58:22 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:19.548 13:58:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:19.548 13:58:22 -- common/autotest_common.sh@1470 -- # uname 00:03:19.548 13:58:22 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:19.548 13:58:22 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:19.548 13:58:22 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:19.548 lcov: LCOV version 1.15 00:03:19.548 13:58:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:26.111 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:26.111 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:26.111 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:26.111 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:26.111 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:26.111 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:48.095 13:58:47 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:48.095 13:58:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.095 13:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:48.095 13:58:47 -- spdk/autotest.sh@89 -- # rm -f 00:03:48.095 13:58:47 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.095 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.095 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:48.095 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:48.095 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:48.095 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:48.095 13:58:48 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:48.095 13:58:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:48.095 13:58:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:48.095 13:58:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:48.095 13:58:48 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:48.095 13:58:48 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:48.095 13:58:48 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.095 13:58:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:48.095 13:58:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:48.095 13:58:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.095 13:58:48 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:48.095 13:58:48 -- spdk/autotest.sh@108 -- # grep -v p 00:03:48.095 13:58:48 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:48.095 13:58:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.096 13:58:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:48.096 13:58:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:48.096 13:58:48 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:48.096 13:58:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:48.096 No valid GPT data, bailing 00:03:48.096 13:58:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.096 13:58:48 -- scripts/common.sh@393 -- # pt= 00:03:48.096 13:58:48 -- scripts/common.sh@394 -- # return 1 00:03:48.096 13:58:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:48.096 1+0 records in 00:03:48.096 1+0 records out 00:03:48.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302632 s, 34.6 MB/s 00:03:48.096 13:58:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.096 13:58:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:48.096 13:58:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:48.096 13:58:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:48.096 13:58:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:48.096 No valid GPT data, bailing 00:03:48.096 13:58:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:48.096 13:58:48 -- scripts/common.sh@393 -- # pt= 00:03:48.096 13:58:48 -- scripts/common.sh@394 -- # return 1 00:03:48.096 13:58:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:48.096 1+0 records in 00:03:48.096 1+0 records out 00:03:48.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418961 s, 250 MB/s 00:03:48.096 13:58:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.096 13:58:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:48.096 13:58:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:48.096 13:58:48 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:48.096 13:58:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:48.096 No valid GPT data, bailing 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # pt= 00:03:48.096 13:58:49 -- scripts/common.sh@394 -- # return 1 00:03:48.096 13:58:49 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:48.096 1+0 
records in 00:03:48.096 1+0 records out 00:03:48.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532305 s, 197 MB/s 00:03:48.096 13:58:49 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.096 13:58:49 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:48.096 13:58:49 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:48.096 13:58:49 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:48.096 13:58:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:48.096 No valid GPT data, bailing 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # pt= 00:03:48.096 13:58:49 -- scripts/common.sh@394 -- # return 1 00:03:48.096 13:58:49 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:48.096 1+0 records in 00:03:48.096 1+0 records out 00:03:48.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529984 s, 198 MB/s 00:03:48.096 13:58:49 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.096 13:58:49 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:48.096 13:58:49 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:48.096 13:58:49 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:48.096 13:58:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:48.096 No valid GPT data, bailing 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # pt= 00:03:48.096 13:58:49 -- scripts/common.sh@394 -- # return 1 00:03:48.096 13:58:49 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:48.096 1+0 records in 00:03:48.096 1+0 records out 00:03:48.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0056975 s, 184 MB/s 00:03:48.096 13:58:49 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:48.096 13:58:49 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:48.096 13:58:49 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:48.096 13:58:49 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:48.096 13:58:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:48.096 No valid GPT data, bailing 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:48.096 13:58:49 -- scripts/common.sh@393 -- # pt= 00:03:48.096 13:58:49 -- scripts/common.sh@394 -- # return 1 00:03:48.096 13:58:49 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:48.096 1+0 records in 00:03:48.096 1+0 records out 00:03:48.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604391 s, 173 MB/s 00:03:48.096 13:58:49 -- spdk/autotest.sh@116 -- # sync 00:03:48.096 13:58:49 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:48.096 13:58:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:48.096 13:58:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:48.372 13:58:51 -- spdk/autotest.sh@122 -- # uname -s 00:03:48.372 13:58:51 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:48.372 13:58:51 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:48.372 13:58:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.372 13:58:51 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.372 13:58:51 -- common/autotest_common.sh@10 -- # set +x 00:03:48.372 ************************************ 00:03:48.372 START TEST setup.sh 00:03:48.372 ************************************ 00:03:48.372 13:58:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:48.372 * Looking for test storage... 00:03:48.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:48.372 13:58:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.372 13:58:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.372 13:58:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.633 13:58:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.633 13:58:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.633 13:58:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.633 13:58:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.633 13:58:51 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.633 13:58:51 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.633 13:58:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.633 13:58:51 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.633 13:58:51 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.633 13:58:51 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.633 13:58:51 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.633 13:58:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.633 13:58:51 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.633 13:58:51 -- scripts/common.sh@344 -- # : 1 00:03:48.633 13:58:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.633 13:58:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.633 13:58:51 -- scripts/common.sh@364 -- # decimal 1 00:03:48.633 13:58:51 -- scripts/common.sh@352 -- # local d=1 00:03:48.633 13:58:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.633 13:58:51 -- scripts/common.sh@354 -- # echo 1 00:03:48.633 13:58:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.633 13:58:51 -- scripts/common.sh@365 -- # decimal 2 00:03:48.633 13:58:51 -- scripts/common.sh@352 -- # local d=2 00:03:48.633 13:58:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.633 13:58:51 -- scripts/common.sh@354 -- # echo 2 00:03:48.633 13:58:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.633 13:58:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.633 13:58:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.633 13:58:51 -- scripts/common.sh@367 -- # return 0 00:03:48.633 13:58:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.633 13:58:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.633 --rc genhtml_branch_coverage=1 00:03:48.633 --rc genhtml_function_coverage=1 00:03:48.633 --rc genhtml_legend=1 00:03:48.633 --rc geninfo_all_blocks=1 00:03:48.633 --rc geninfo_unexecuted_blocks=1 00:03:48.633 00:03:48.633 ' 00:03:48.633 13:58:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.633 --rc genhtml_branch_coverage=1 00:03:48.633 --rc genhtml_function_coverage=1 00:03:48.633 --rc genhtml_legend=1 00:03:48.633 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.634 --rc genhtml_branch_coverage=1 00:03:48.634 --rc genhtml_function_coverage=1 00:03:48.634 --rc genhtml_legend=1 00:03:48.634 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.634 --rc genhtml_branch_coverage=1 00:03:48.634 --rc genhtml_function_coverage=1 00:03:48.634 --rc genhtml_legend=1 00:03:48.634 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- setup/test-setup.sh@10 -- # uname -s 00:03:48.634 13:58:51 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:48.634 13:58:51 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:48.634 13:58:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.634 13:58:51 -- common/autotest_common.sh@10 -- # set +x 00:03:48.634 ************************************ 00:03:48.634 START TEST acl 00:03:48.634 ************************************ 00:03:48.634 13:58:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:48.634 * Looking for test storage... 
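The cmp_versions trace that keeps reappearing at the start of each test (here deciding `lt 1.15 2`) is a field-by-field dotted-version comparison used to gate the extra lcov branch/function-coverage switches on the installed lcov version. A simplified bash sketch of the same idea (the traced scripts/common.sh also splits on '-' and ':'):

  #!/usr/bin/env bash
  # version_lt A B: succeed (return 0) when A < B, comparing numeric dot fields;
  # missing fields are treated as 0.
  version_lt() {
      local -a a b; local i
      IFS=. read -r -a a <<< "$1"
      IFS=. read -r -a b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  # lcov 1.15 predates 2.x, so the run enables the extra coverage options:
  version_lt 1.15 2 && \
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'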
00:03:48.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:48.634 13:58:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.634 13:58:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.634 13:58:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.634 13:58:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.634 13:58:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.634 13:58:51 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.634 13:58:51 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.634 13:58:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.634 13:58:51 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.634 13:58:51 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.634 13:58:51 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.634 13:58:51 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.634 13:58:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.634 13:58:51 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.634 13:58:51 -- scripts/common.sh@344 -- # : 1 00:03:48.634 13:58:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.634 13:58:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.634 13:58:51 -- scripts/common.sh@364 -- # decimal 1 00:03:48.634 13:58:51 -- scripts/common.sh@352 -- # local d=1 00:03:48.634 13:58:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.634 13:58:51 -- scripts/common.sh@354 -- # echo 1 00:03:48.634 13:58:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.634 13:58:51 -- scripts/common.sh@365 -- # decimal 2 00:03:48.634 13:58:51 -- scripts/common.sh@352 -- # local d=2 00:03:48.634 13:58:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.634 13:58:51 -- scripts/common.sh@354 -- # echo 2 00:03:48.634 13:58:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.634 13:58:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.634 13:58:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.634 13:58:51 -- scripts/common.sh@367 -- # return 0 00:03:48.634 13:58:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.634 --rc genhtml_branch_coverage=1 00:03:48.634 --rc genhtml_function_coverage=1 00:03:48.634 --rc genhtml_legend=1 00:03:48.634 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.634 --rc genhtml_branch_coverage=1 00:03:48.634 --rc genhtml_function_coverage=1 00:03:48.634 --rc genhtml_legend=1 00:03:48.634 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.634 --rc genhtml_branch_coverage=1 00:03:48.634 --rc genhtml_function_coverage=1 00:03:48.634 --rc genhtml_legend=1 00:03:48.634 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.634 --rc genhtml_branch_coverage=1 00:03:48.634 --rc genhtml_function_coverage=1 00:03:48.634 --rc genhtml_legend=1 00:03:48.634 --rc geninfo_all_blocks=1 00:03:48.634 --rc geninfo_unexecuted_blocks=1 00:03:48.634 00:03:48.634 ' 00:03:48.634 13:58:51 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:48.634 13:58:51 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:48.634 13:58:51 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:48.634 13:58:51 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:48.634 13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:48.634 13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:48.634 13:58:51 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:48.634 13:58:51 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:48.634 
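The is_block_zoned loop traced here (and earlier during pre-cleanup) classifies every /sys/block/nvme* node by reading its queue/zoned attribute; `none` means an ordinary device, while anything else (e.g. host-managed) would be set aside as zoned. A condensed sketch of that check:

  #!/usr/bin/env bash
  # Collect zoned nvme block devices the way the traced loop does: a device
  # is zoned when /sys/block/<dev>/queue/zoned reads something other than "none".
  zoned_devs=()
  for dev in /sys/block/nvme*; do
      [[ -e $dev/queue/zoned ]] || continue
      [[ $(<"$dev/queue/zoned") != none ]] && zoned_devs+=("${dev##*/}")
  done
  (( ${#zoned_devs[@]} )) && printf 'zoned: %s\n' "${zoned_devs[@]}"

In this run every device reads "none" (each `[[ none != none ]]` fails), so all six namespaces stay in play for the ACL tests.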
13:58:51 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:48.634 13:58:51 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:48.634 13:58:51 -- setup/acl.sh@12 -- # devs=() 00:03:48.634 13:58:51 -- setup/acl.sh@12 -- # declare -a devs 00:03:48.634 13:58:51 -- setup/acl.sh@13 -- # drivers=() 00:03:48.634 13:58:51 -- setup/acl.sh@13 -- # declare -A drivers 00:03:48.634 13:58:51 -- setup/acl.sh@51 -- # setup reset 00:03:48.634 13:58:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.635 13:58:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.023 13:58:52 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:50.023 13:58:52 -- setup/acl.sh@16 -- # local dev driver 00:03:50.023 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.023 13:58:52 -- setup/acl.sh@15 -- # setup output status 00:03:50.023 13:58:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.023 13:58:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:50.023 Hugepages 00:03:50.023 node hugesize free / total 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # continue 00:03:50.023 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.023 00:03:50.023 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # continue 00:03:50.023 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:50.023 13:58:52 -- setup/acl.sh@20 -- # continue 00:03:50.023 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.023 13:58:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.023 13:58:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.023 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.023 13:58:52 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.023 13:58:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:50.023 13:58:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.023 13:58:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.023 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.285 13:58:52 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:50.285 13:58:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.285 13:58:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:50.285 13:58:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.285 13:58:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.285 13:58:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.285 13:58:53 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:50.285 13:58:53 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.285 13:58:53 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:50.285 13:58:53 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.285 13:58:53 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
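The `Hugepages / node hugesize free / total` status block above is setup.sh summarizing the per-NUMA-node hugepage pools before listing PCI devices. Roughly equivalent plumbing, using the standard kernel sysfs paths (the real script's formatting differs):

  #!/usr/bin/env bash
  # Print free/total hugepages per node and page size, as in the status output.
  echo "node hugesize  free / total"
  for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      node=${d#/sys/devices/system/node/node}; node=${node%%/*}
      size=${d##*hugepages-}
      echo "$node $size $(<"$d"/free_hugepages) / $(<"$d"/nr_hugepages)"
  done

The PCI scan that follows it reads one `Type BDF Vendor Device NUMA Driver` row per device, keeping only rows whose driver column is nvme, which is how the four controllers end up in devs (`(( 4 > 0 ))` in the next trace).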
00:03:50.285 13:58:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.285 13:58:53 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:50.285 13:58:53 -- setup/acl.sh@54 -- # run_test denied denied 00:03:50.285 13:58:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.285 13:58:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.285 13:58:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.285 ************************************ 00:03:50.285 START TEST denied 00:03:50.285 ************************************ 00:03:50.285 13:58:53 -- common/autotest_common.sh@1114 -- # denied 00:03:50.285 13:58:53 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:50.285 13:58:53 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:50.285 13:58:53 -- setup/acl.sh@38 -- # setup output config 00:03:50.285 13:58:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.285 13:58:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.716 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:51.716 13:58:54 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:51.716 13:58:54 -- setup/acl.sh@28 -- # local dev driver 00:03:51.716 13:58:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.716 13:58:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:51.716 13:58:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:51.716 13:58:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.716 13:58:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.716 13:58:54 -- setup/acl.sh@41 -- # setup reset 00:03:51.716 13:58:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.716 13:58:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.311 00:03:58.311 real 0m7.028s 00:03:58.311 user 0m0.699s 00:03:58.311 sys 0m1.148s 00:03:58.311 13:59:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.311 13:59:00 -- common/autotest_common.sh@10 -- # set +x 00:03:58.311 ************************************ 00:03:58.311 END TEST denied 00:03:58.311 ************************************ 00:03:58.311 13:59:00 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.311 13:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.311 13:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.311 13:59:00 -- common/autotest_common.sh@10 -- # set +x 00:03:58.311 ************************************ 00:03:58.311 START TEST allowed 00:03:58.311 ************************************ 00:03:58.311 13:59:00 -- common/autotest_common.sh@1114 -- # allowed 00:03:58.311 13:59:00 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:58.311 13:59:00 -- setup/acl.sh@45 -- # setup output config 00:03:58.311 13:59:00 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:58.311 13:59:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.311 13:59:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.311 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.311 13:59:01 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:58.311 13:59:01 -- setup/acl.sh@28 -- # local dev driver 00:03:58.311 13:59:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:58.311 13:59:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:58.311 13:59:01 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:58.311 13:59:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:58.311 13:59:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:58.311 13:59:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:58.311 13:59:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:58.311 13:59:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:58.311 13:59:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:58.311 13:59:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:58.311 13:59:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:58.311 13:59:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:58.311 13:59:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:58.311 13:59:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:58.311 13:59:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:58.311 13:59:01 -- setup/acl.sh@48 -- # setup reset 00:03:58.311 13:59:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.311 13:59:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.255 00:03:59.255 real 0m1.973s 00:03:59.255 user 0m0.798s 00:03:59.255 sys 0m0.950s 00:03:59.255 ************************************ 00:03:59.255 END TEST allowed 00:03:59.255 ************************************ 00:03:59.255 13:59:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.255 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:03:59.255 ************************************ 00:03:59.255 END TEST acl 00:03:59.255 ************************************ 00:03:59.255 00:03:59.255 real 0m10.793s 00:03:59.255 user 0m2.213s 00:03:59.255 sys 0m3.034s 00:03:59.255 13:59:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.255 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:03:59.518 13:59:02 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:59.518 13:59:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.518 13:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.518 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:03:59.518 ************************************ 00:03:59.518 START TEST hugepages 00:03:59.518 ************************************ 00:03:59.518 13:59:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:59.518 * Looking for test storage... 
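The `nvme -> uio_pci_generic` line in the allowed test above is setup.sh releasing a controller from the kernel nvme driver and handing it to uio_pci_generic; the verify steps then confirm the binding by resolving the device's driver symlink, exactly as the `readlink -f .../driver` traces show. A bare-bones sketch of that sysfs dance (run as root; 0000:00:06.0 is simply the device from this log, and the real script may use new_id instead of driver_override):

  #!/usr/bin/env bash
  bdf=0000:00:06.0
  dev=/sys/bus/pci/devices/$bdf
  # What the verify steps do: resolve the currently bound driver.
  readlink -f "$dev/driver"
  # Rebind: release the current driver, steer the next probe, reprobe.
  modprobe uio_pci_generic
  echo "$bdf" > "$dev/driver/unbind"
  echo uio_pci_generic > "$dev/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe
  echo "" > "$dev/driver_override"   # clear the override afterwards

PCI_BLOCKED and PCI_ALLOWED, as used by the denied/allowed tests, just tell setup.sh which BDFs to skip or restrict this rebinding to.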
00:03:59.518 13:59:02 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:59.518 13:59:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:59.518 13:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:59.518 13:59:02 -- common/autotest_common.sh@10 -- # set +x
00:03:59.518 ************************************
00:03:59.518 START TEST hugepages
00:03:59.518 ************************************
00:03:59.518 13:59:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:59.518 * Looking for test storage...
00:03:59.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:59.518 13:59:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:59.518 13:59:02 -- common/autotest_common.sh@1690 -- # lcov --version
00:03:59.518 13:59:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:59.518 13:59:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:59.518 13:59:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:59.518 13:59:02 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:59.518 13:59:02 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:59.518 13:59:02 -- scripts/common.sh@335 -- # IFS=.-:
00:03:59.518 13:59:02 -- scripts/common.sh@335 -- # read -ra ver1
00:03:59.518 13:59:02 -- scripts/common.sh@336 -- # IFS=.-:
00:03:59.518 13:59:02 -- scripts/common.sh@336 -- # read -ra ver2
00:03:59.518 13:59:02 -- scripts/common.sh@337 -- # local 'op=<'
00:03:59.518 13:59:02 -- scripts/common.sh@339 -- # ver1_l=2
00:03:59.518 13:59:02 -- scripts/common.sh@340 -- # ver2_l=1
00:03:59.518 13:59:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:59.518 13:59:02 -- scripts/common.sh@343 -- # case "$op" in
00:03:59.518 13:59:02 -- scripts/common.sh@344 -- # : 1
00:03:59.518 13:59:02 -- scripts/common.sh@363 -- # (( v = 0 ))
00:03:59.518 13:59:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:59.518 13:59:02 -- scripts/common.sh@364 -- # decimal 1
00:03:59.518 13:59:02 -- scripts/common.sh@352 -- # local d=1
00:03:59.518 13:59:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:59.518 13:59:02 -- scripts/common.sh@354 -- # echo 1
00:03:59.518 13:59:02 -- scripts/common.sh@364 -- # ver1[v]=1
00:03:59.518 13:59:02 -- scripts/common.sh@365 -- # decimal 2
00:03:59.518 13:59:02 -- scripts/common.sh@352 -- # local d=2
00:03:59.518 13:59:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:59.518 13:59:02 -- scripts/common.sh@354 -- # echo 2
00:03:59.518 13:59:02 -- scripts/common.sh@365 -- # ver2[v]=2
00:03:59.518 13:59:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:59.518 13:59:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:59.518 13:59:02 -- scripts/common.sh@367 -- # return 0
00:03:59.518 13:59:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:59.518 13:59:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.518 --rc genhtml_branch_coverage=1
00:03:59.518 --rc genhtml_function_coverage=1
00:03:59.518 --rc genhtml_legend=1
00:03:59.518 --rc geninfo_all_blocks=1
00:03:59.518 --rc geninfo_unexecuted_blocks=1
00:03:59.518
00:03:59.518 '
00:03:59.518 13:59:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.518 --rc genhtml_branch_coverage=1
00:03:59.518 --rc genhtml_function_coverage=1
00:03:59.518 --rc genhtml_legend=1
00:03:59.518 --rc geninfo_all_blocks=1
00:03:59.518 --rc geninfo_unexecuted_blocks=1
00:03:59.518
00:03:59.518 '
00:03:59.518 13:59:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:03:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.518 --rc genhtml_branch_coverage=1
00:03:59.518 --rc genhtml_function_coverage=1
00:03:59.518 --rc genhtml_legend=1
00:03:59.518 --rc geninfo_all_blocks=1
00:03:59.518 --rc geninfo_unexecuted_blocks=1
00:03:59.518
00:03:59.518 '
00:03:59.518 13:59:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:03:59.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.518 --rc genhtml_branch_coverage=1
00:03:59.518 --rc genhtml_function_coverage=1
00:03:59.518 --rc genhtml_legend=1
00:03:59.518 --rc geninfo_all_blocks=1
00:03:59.518 --rc geninfo_unexecuted_blocks=1
00:03:59.518
00:03:59.518 '
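The cmp_versions trace above is the interesting part of this preamble: version strings are split on '.', '-' and ':' and compared field by field as integers, which is why 1.15 sorts below 2 here even though it would not lexically. A compact bash re-implementation of that idea, simplified to the "<" case (the real scripts/common.sh handles more operators):

```bash
# Numeric, field-wise version comparison in the style of cmp_versions:
# split on . - :, treat missing fields as zero, compare each field.
version_lt() {
  local IFS=.-: v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field wins
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger field loses
  done
  return 1   # all fields equal: not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace's return 0
```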
00:03:59.518 13:59:02 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:59.518 13:59:02 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:59.518 13:59:02 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:59.518 13:59:02 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:59.518 13:59:02 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:59.518 13:59:02 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:59.518 13:59:02 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:59.518 13:59:02 -- setup/common.sh@18 -- # local node=
00:03:59.518 13:59:02 -- setup/common.sh@19 -- # local var val
00:03:59.518 13:59:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:59.518 13:59:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.518 13:59:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.518 13:59:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.518 13:59:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.518 13:59:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.518 13:59:02 -- setup/common.sh@31 -- # IFS=': '
00:03:59.518 13:59:02 -- setup/common.sh@31 -- # read -r var val _
00:03:59.518 13:59:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5811776 kB' 'MemAvailable: 7368588 kB' 'Buffers: 3444 kB' 'Cached: 1768904 kB' 'SwapCached: 0 kB' 'Active: 465540 kB' 'Inactive: 1422592 kB' 'Active(anon): 126316 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 117580 kB' 'Mapped: 50700 kB' 'Shmem: 10532 kB' 'KReclaimable: 63844 kB' 'Slab: 161724 kB' 'SReclaimable: 63844 kB' 'SUnreclaim: 97880 kB' 'KernelStack: 6416 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 301168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:03:59.519 13:59:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:59.519 13:59:02 -- setup/common.sh@32 -- # continue
[xtrace elided: setup/common.sh@31-32 repeat the IFS=': ' / read / compare / continue cycle for every remaining key in the snapshot above until the match below]
00:03:59.520 13:59:02 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:59.520 13:59:02 -- setup/common.sh@33 -- # echo 2048
00:03:59.520 13:59:02 -- setup/common.sh@33 -- # return 0
00:03:59.520 13:59:02 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:59.520 13:59:02 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
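The scan that was just traced, reduced to its essentials: get_meminfo is a field lookup over /proc/meminfo (node= was empty here, so the per-node branch was skipped). The sketch below is reconstructed from the trace rather than copied from setup/common.sh, so treat the details as approximate:

```bash
#!/usr/bin/env bash
# Field lookup over a meminfo file, mirroring the traced get_meminfo:
# read the file into an array, strip any "Node <n> " prefix that per-node
# meminfo files carry, split each line on ': ', echo the first match.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
  local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
  [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # no-op for the global /proc/meminfo
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

get_meminfo Hugepagesize   # prints 2048 on the VM in this log
```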
00:03:59.520 13:59:02 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:59.520 13:59:02 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:59.520 13:59:02 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:59.520 13:59:02 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:59.520 13:59:02 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:59.520 13:59:02 -- setup/hugepages.sh@207 -- # get_nodes
00:03:59.520 13:59:02 -- setup/hugepages.sh@27 -- # local node
00:03:59.520 13:59:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.520 13:59:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:59.520 13:59:02 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:59.520 13:59:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:59.520 13:59:02 -- setup/hugepages.sh@208 -- # clear_hp
00:03:59.520 13:59:02 -- setup/hugepages.sh@37 -- # local node hp
00:03:59.520 13:59:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:59.520 13:59:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:59.520 13:59:02 -- setup/hugepages.sh@41 -- # echo 0
00:03:59.520 13:59:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:59.520 13:59:02 -- setup/hugepages.sh@41 -- # echo 0
00:03:59.520 13:59:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:59.520 13:59:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:59.520 13:59:02 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:59.520 13:59:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:59.520 13:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:59.520 13:59:02 -- common/autotest_common.sh@10 -- # set +x
00:03:59.520 ************************************
00:03:59.520 START TEST default_setup
00:03:59.520 ************************************
00:03:59.520 13:59:02 -- common/autotest_common.sh@1114 -- # default_setup
00:03:59.520 13:59:02 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:59.520 13:59:02 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:59.520 13:59:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:59.520 13:59:02 -- setup/hugepages.sh@51 -- # shift
00:03:59.520 13:59:02 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:59.520 13:59:02 -- setup/hugepages.sh@52 -- # local node_ids
00:03:59.520 13:59:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.520 13:59:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:59.520 13:59:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:59.520 13:59:02 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:59.520 13:59:02 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:59.520 13:59:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:59.520 13:59:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:59.520 13:59:02 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:59.520 13:59:02 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:59.520 13:59:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:59.520 13:59:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:59.520 13:59:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:59.520 13:59:02 -- setup/hugepages.sh@73 -- # return 0
00:03:59.520 13:59:02 -- setup/hugepages.sh@137 -- # setup output
00:03:59.520 13:59:02 -- setup/common.sh@9 -- # [[ output == output ]]
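To make the numbers above concrete: get_test_nr_hugepages turned the 2097152 kB request into nr_hugepages=1024 because 2097152 / 2048 = 1024 pages of 2 MiB each, and clear_hp's echo 0 lines zero every per-node pool first. A sketch of driving the same sysfs and procfs knobs by hand, assuming root privileges and the single-node layout this VM has:

```bash
# Zero out every per-node hugepage pool (what clear_hp's "echo 0" does),
# then reserve 1024 x 2 MiB pages through the global knob.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
  echo 0 > "$hp"
done
echo 1024 > /proc/sys/vm/nr_hugepages   # 1024 * 2048 kB = 2 GiB reserved
grep -E '^(HugePages_(Total|Free)|Hugepagesize)' /proc/meminfo
```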
00:03:59.520 13:59:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:00.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:00.468 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:00.468 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:00.732 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:00.732 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:00.732 13:59:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:00.732 13:59:03 -- setup/hugepages.sh@89 -- # local node
00:04:00.732 13:59:03 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:00.732 13:59:03 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:00.732 13:59:03 -- setup/hugepages.sh@92 -- # local surp
00:04:00.732 13:59:03 -- setup/hugepages.sh@93 -- # local resv
00:04:00.732 13:59:03 -- setup/hugepages.sh@94 -- # local anon
00:04:00.732 13:59:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:00.732 13:59:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:00.732 13:59:03 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:00.732 13:59:03 -- setup/common.sh@18 -- # local node=
00:04:00.732 13:59:03 -- setup/common.sh@19 -- # local var val
00:04:00.732 13:59:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:00.732 13:59:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.732 13:59:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.732 13:59:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.732 13:59:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.733 13:59:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.733 13:59:03 -- setup/common.sh@31 -- # IFS=': '
00:04:00.733 13:59:03 -- setup/common.sh@31 -- # read -r var val _
00:04:00.733 13:59:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915064 kB' 'MemAvailable: 9471636 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467396 kB' 'Inactive: 1422612 kB' 'Active(anon): 128172 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 118988 kB' 'Mapped: 50868 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161396 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98072 kB' 'KernelStack: 6384 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:00.733 13:59:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.733 13:59:03 -- setup/common.sh@32 -- # continue
[xtrace elided: the same IFS / read / compare / continue cycle for every remaining key until AnonHugePages matches]
00:04:00.734 13:59:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.734 13:59:03 -- setup/common.sh@33 -- # echo 0
00:04:00.734 13:59:03 -- setup/common.sh@33 -- # return 0
00:04:00.734 13:59:03 -- setup/hugepages.sh@97 -- # anon=0
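The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at the top of verify_nr_hugepages is a transparent-hugepage gate: the AnonHugePages lookup that just returned 0 only matters when THP is not disabled. A sketch of the same gate, reading the standard sysfs location and reusing the get_meminfo helper sketched earlier:

```bash
# The bracketed word in this file is the active THP mode,
# e.g. "always [madvise] never" on the VM in this log.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *'[never]'* ]]; then
  # THP enabled in some form: anonymous huge pages can appear.
  echo "AnonHugePages: $(get_meminfo AnonHugePages) kB"
else
  echo "THP disabled; skipping AnonHugePages accounting"
fi
```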
00:04:00.734 13:59:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:00.734 13:59:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.734 13:59:03 -- setup/common.sh@18 -- # local node=
00:04:00.734 13:59:03 -- setup/common.sh@19 -- # local var val
00:04:00.734 13:59:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:00.734 13:59:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.734 13:59:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.734 13:59:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.734 13:59:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.734 13:59:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.734 13:59:03 -- setup/common.sh@31 -- # IFS=': '
00:04:00.734 13:59:03 -- setup/common.sh@31 -- # read -r var val _
00:04:00.734 13:59:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915064 kB' 'MemAvailable: 9471636 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467032 kB' 'Inactive: 1422612 kB' 'Active(anon): 127808 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 118924 kB' 'Mapped: 50816 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161408 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6368 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:00.734 13:59:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.734 13:59:03 -- setup/common.sh@32 -- # continue
[xtrace elided: the same per-key cycle until HugePages_Surp matches]
00:04:00.735 13:59:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.735 13:59:03 -- setup/common.sh@33 -- # echo 0
00:04:00.735 13:59:03 -- setup/common.sh@33 -- # return 0
00:04:00.735 13:59:03 -- setup/hugepages.sh@99 -- # surp=0
00:04:00.735 13:59:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.735 13:59:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.735 13:59:03 -- setup/common.sh@18 -- # local node=
00:04:00.735 13:59:03 -- setup/common.sh@19 -- # local var val
00:04:00.735 13:59:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:00.735 13:59:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.735 13:59:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.735 13:59:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.735 13:59:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.735 13:59:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.735 13:59:03 -- setup/common.sh@31 -- # IFS=': '
00:04:00.735 13:59:03 -- setup/common.sh@31 -- # read -r var val _
00:04:00.735 13:59:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915064 kB' 'MemAvailable: 9471636 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467296 kB' 'Inactive: 1422612 kB' 'Active(anon): 128072 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 118944 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161412 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98088 kB' 'KernelStack: 6352 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue
[xtrace elided: the per-key cycle continues toward HugePages_Rsvd]
setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # continue 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.736 13:59:03 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.736 13:59:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.736 13:59:03 -- setup/common.sh@33 -- # echo 0 00:04:00.736 13:59:03 -- setup/common.sh@33 -- # return 0 00:04:00.736 nr_hugepages=1024 00:04:00.736 resv_hugepages=0 00:04:00.736 surplus_hugepages=0 00:04:00.736 anon_hugepages=0 00:04:00.736 13:59:03 -- setup/hugepages.sh@100 -- # resv=0 00:04:00.736 13:59:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.736 13:59:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.736 13:59:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.737 13:59:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.737 13:59:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.737 13:59:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.737 13:59:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.737 13:59:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.737 13:59:03 -- setup/common.sh@18 -- # local node= 00:04:00.737 13:59:03 -- setup/common.sh@19 -- # local var val 00:04:00.737 13:59:03 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.737 13:59:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.737 13:59:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.737 13:59:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.737 13:59:03 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.737 13:59:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.737 13:59:03 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.737 13:59:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7914764 kB' 'MemAvailable: 9471336 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467232 kB' 'Inactive: 1422612 kB' 'Active(anon): 128008 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 
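The xtrace above is all of get_meminfo at work: the target key is fixed (local get=...), the meminfo file is slurped with mapfile, any "Node <N> " prefix is stripped, and an IFS=': ' read loop either matches the key and echoes its value or hits 'continue'. A minimal re-creation of that loop plus the accounting check that follows it; the body is reconstructed from the trace lines, so treat the details as an approximation rather than SPDK's exact source:

    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file otherwise.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip it so the same
        # parser handles both layouts.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # The consistency check the trace performs next: the configured page count
    # must equal the expected total plus surplus and reserved pages.
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    (( total == 1024 + surp + resv )) && echo 'hugepage accounting is consistent'

The escaped patterns in the trace ([[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) are just how bash xtrace renders the literal comparison inside this loop.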
00:04:00.737 13:59:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.737 13:59:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.737 13:59:03 -- setup/common.sh@18 -- # local node=
00:04:00.737 13:59:03 -- setup/common.sh@19 -- # local var val
00:04:00.737 13:59:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:00.737 13:59:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.737 13:59:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.737 13:59:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.737 13:59:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.737 13:59:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.737 13:59:03 -- setup/common.sh@31 -- # IFS=': '
00:04:00.737 13:59:03 -- setup/common.sh@31 -- # read -r var val _
00:04:00.737 13:59:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7914764 kB' 'MemAvailable: 9471336 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467232 kB' 'Inactive: 1422612 kB' 'Active(anon): 128008 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 118844 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161408 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98084 kB' 'KernelStack: 6320 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[... xtrace key scan elided: every key that is not HugePages_Total fails the match and hits 'continue' ...]
00:04:00.738 13:59:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.738 13:59:03 -- setup/common.sh@33 -- # echo 1024
00:04:00.738 13:59:03 -- setup/common.sh@33 -- # return 0
00:04:00.738 13:59:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.738 13:59:03 -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.738 13:59:03 -- setup/hugepages.sh@27 -- # local node
00:04:00.738 13:59:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.738 13:59:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:00.738 13:59:03 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:00.738 13:59:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:00.738 13:59:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.738 13:59:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
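get_nodes found a single memory node (no_nodes=1), so the test now re-queries HugePages_Surp per node. The only thing that changes inside get_meminfo is the file: /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. The per-node file prefixes every line with "Node <id> ", which is exactly what the ${mem[@]#Node +([0-9]) } expansion strips. A two-line illustration, with a hypothetical sample line formatted the way node0/meminfo would emit it:

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'   # hypothetical sample line from node0/meminfo
    echo "${line#Node +([0-9]) }"     # prints 'HugePages_Surp: 0', same shape as /proc/meminfo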
00:04:00.738 13:59:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:00.738 13:59:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.738 13:59:03 -- setup/common.sh@18 -- # local node=0
00:04:00.738 13:59:03 -- setup/common.sh@19 -- # local var val
00:04:00.738 13:59:03 -- setup/common.sh@20 -- # local mem_f mem
00:04:00.738 13:59:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.738 13:59:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:00.738 13:59:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:00.738 13:59:03 -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.738 13:59:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.738 13:59:03 -- setup/common.sh@31 -- # IFS=': '
00:04:00.738 13:59:03 -- setup/common.sh@31 -- # read -r var val _
00:04:00.738 13:59:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7914764 kB' 'MemUsed: 4322332 kB' 'SwapCached: 0 kB' 'Active: 467104 kB' 'Inactive: 1422612 kB' 'Active(anon): 127880 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'FilePages: 1772336 kB' 'Mapped: 50688 kB' 'AnonPages: 118700 kB' 'Shmem: 10492 kB' 'KernelStack: 6356 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161408 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace key scan over the node0 fields elided: every key that is not HugePages_Surp fails the match and hits 'continue' ...]
00:04:00.739 13:59:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.739 13:59:03 -- setup/common.sh@33 -- # echo 0
00:04:00.739 13:59:03 -- setup/common.sh@33 -- # return 0
00:04:00.739 13:59:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.739 13:59:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.739 13:59:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.739 13:59:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
00:04:00.739 13:59:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:00.739 13:59:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:00.739 real 0m1.184s
00:04:00.739 user 0m0.465s
00:04:00.739 sys 0m0.574s
00:04:00.739 13:59:03 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:00.739 13:59:03 -- common/autotest_common.sh@10 -- # set +x
00:04:00.739 ************************************
00:04:00.739 END TEST default_setup
00:04:00.739 ************************************
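The START/END banners and the real/user/sys block come from the run_test harness in autotest_common.sh. Its exact body is not shown in this log; a plausible minimal shape, inferred only from the banners, the argument-count guard ('[' 2 -le 1 ']') and the timing output, might look like the following hypothetical simplification:

    run_test() {
        # Guard seen in the trace as '[' 2 -le 1 ']': need a name plus a command.
        [ "$#" -le 1 ] && return 1
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"             # produces the real/user/sys lines in the log
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

Called as in the next trace line, run_test per_node_1G_alloc per_node_1G_alloc, the first argument names the test for the banners and the rest is the command that gets timed.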
00:04:00.739 13:59:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:00.739 13:59:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:00.739 13:59:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:00.739 13:59:03 -- common/autotest_common.sh@10 -- # set +x
00:04:00.739 ************************************
00:04:00.739 START TEST per_node_1G_alloc
00:04:00.739 ************************************
00:04:00.739 13:59:03 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:00.739 13:59:03 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:00.739 13:59:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:00.739 13:59:03 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:00.739 13:59:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:00.739 13:59:03 -- setup/hugepages.sh@51 -- # shift
00:04:00.739 13:59:03 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:00.739 13:59:03 -- setup/hugepages.sh@52 -- # local node_ids
00:04:00.739 13:59:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.739 13:59:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:00.739 13:59:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:00.739 13:59:03 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:00.739 13:59:03 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:00.739 13:59:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:00.739 13:59:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:00.739 13:59:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:00.739 13:59:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:00.739 13:59:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:00.739 13:59:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:00.739 13:59:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:00.739 13:59:03 -- setup/hugepages.sh@73 -- # return 0
00:04:00.739 13:59:03 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:00.739 13:59:03 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:00.739 13:59:03 -- setup/hugepages.sh@146 -- # setup output
00:04:00.739 13:59:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:00.739 13:59:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:01.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:01.313 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.313 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.313 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.313 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.313 13:59:04 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:01.313 13:59:04 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:01.313 13:59:04 -- setup/hugepages.sh@89 -- # local node
00:04:01.313 13:59:04 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.313 13:59:04 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.313 13:59:04 -- setup/hugepages.sh@92 -- # local surp
00:04:01.313 13:59:04 -- setup/hugepages.sh@93 -- # local resv
00:04:01.313 13:59:04 -- setup/hugepages.sh@94 -- # local anon
00:04:01.313 13:59:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
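get_test_nr_hugepages turns a size budget into a page count: 1048576 kB (1 GiB) at the default 2048 kB hugepage size is 512 pages, and with HUGENODE=0 all of them are requested on node 0. The division itself happens between trace lines @55 and @57, so the sketch below fills it in as an assumption; the variable names follow the xtrace:

    default_hugepages=2048          # kB; matches 'Hugepagesize: 2048 kB' in the snapshots
    size=1048576                    # kB; first argument to get_test_nr_hugepages (1 GiB)
    node_ids=(0)                    # remaining arguments: target NUMA nodes
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
    declare -A nodes_test
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages            # node 0 gets all 512 pages
    done
    NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]}   # 512 pages, node 0, handed to setup.sh

This also explains the snapshots that follow: HugePages_Total drops from 1024 to 512 and Hugetlb from 2097152 kB to 1048576 kB once setup.sh has applied the new reservation.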
00:04:01.313 13:59:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.313 13:59:04 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.313 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.313 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.313 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.313 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.313 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.313 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.313 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.313 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.313 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.314 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.314 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964024 kB' 'MemAvailable: 10520604 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467844 kB' 'Inactive: 1422620 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 119780 kB' 'Mapped: 50772 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161564 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98240 kB' 'KernelStack: 6356 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[... xtrace key scan elided: every key that is not AnonHugePages fails the match and hits 'continue' ...]
00:04:01.315 13:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.315 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.315 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.315 13:59:04 -- setup/hugepages.sh@97 -- # anon=0
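The anon=0 probe above runs only because the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test passed: the bracketed word in that string is the active transparent-hugepage mode, and the glob fails only when the active mode is 'never'. A minimal stand-alone version of that gate, assuming the standard sysfs knob:

    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP is active, so anonymous hugepages may exist and are worth counting.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # kB
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"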
00:04:01.315 13:59:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.315 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.315 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.315 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.315 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.315 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.315 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.315 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.315 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.315 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.315 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.315 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.315 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964032 kB' 'MemAvailable: 10520612 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467356 kB' 'Inactive: 1422620 kB' 'Active(anon): 128132 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 119252 kB' 'Mapped: 50772 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161564 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98240 kB' 'KernelStack: 6404 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:01.315 13:59:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every /proc/meminfo key from MemTotal onward, continuing until HugePages_Surp matches]
00:04:01.317 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.317 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.317 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.317 13:59:04 -- setup/hugepages.sh@99 -- # surp=0
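
The same lookup can be done in a single pass; a hypothetical awk-based equivalent of the loop condensed above (the function name and fallback behavior are illustrative, not part of setup/common.sh):

  # Hypothetical one-shot equivalent of the per-key scan (not the SPDK helper itself).
  get_meminfo_awk() {   # usage: get_meminfo_awk <Key> [<node>]
    local key=$1 node=${2-} f=/proc/meminfo
    [[ -n $node ]] && f=/sys/devices/system/node/node$node/meminfo
    # Drop an optional "Node <n> " prefix, then print the value of "<Key>:".
    sed -E 's/^Node [0-9]+ //' "$f" | awk -v k="$key:" '$1 == k { print $2; exit }'
  }
  get_meminfo_awk HugePages_Surp   # -> 0, matching the trace above
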
00:04:01.317 13:59:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.317 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.317 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.317 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.317 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.317 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.317 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.317 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.317 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.317 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.317 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.317 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.317 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964080 kB' 'MemAvailable: 10520660 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467244 kB' 'Inactive: 1422620 kB' 'Active(anon): 128020 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 119124 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161548 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98224 kB' 'KernelStack: 6368 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:01.317 13:59:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every /proc/meminfo key, continuing until HugePages_Rsvd matches]
00:04:01.319 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.319 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.319 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.319 13:59:04 -- setup/hugepages.sh@100 -- # resv=0
00:04:01.319 nr_hugepages=512
00:04:01.319 resv_hugepages=0
00:04:01.319 13:59:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:01.319 13:59:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.319 surplus_hugepages=0
00:04:01.319 anon_hugepages=0
00:04:01.319 13:59:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.319 13:59:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.319 13:59:04 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:01.319 13:59:04 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
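
At this point the trace has established anon=0, surp=0, resv=0 against the 512-page request, and the arithmetic checks above assert the pool is fully accounted for. A sketch of the same consistency check against the live kernel counters (the helper name and messages are illustrative):

  # Sketch: re-derive the accounting check (( 512 == nr_hugepages + surp + resv )).
  hp() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }
  total=$(hp HugePages_Total) surp=$(hp HugePages_Surp) resv=$(hp HugePages_Rsvd)
  nr=$(cat /proc/sys/vm/nr_hugepages)
  # HugePages_Total is the persistent pool plus any surplus pages.
  (( total == nr + surp )) || echo "surplus accounting mismatch"
  (( total == 512 && surp == 0 && resv == 0 )) && echo "pool matches the 512-page request"
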
00:04:01.319 13:59:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.319 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.319 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.319 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.319 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.319 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.319 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.319 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.319 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.319 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.319 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.319 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.319 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964080 kB' 'MemAvailable: 10520660 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 466856 kB' 'Inactive: 1422620 kB' 'Active(anon): 127632 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 118744 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161540 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98216 kB' 'KernelStack: 6336 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:01.319 13:59:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every /proc/meminfo key, continuing until HugePages_Total matches]
00:04:01.321 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.321 13:59:04 -- setup/common.sh@33 -- # echo 512
00:04:01.321 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.321 13:59:04 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:01.321 13:59:04 -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.321 13:59:04 -- setup/hugepages.sh@27 -- # local node
00:04:01.321 13:59:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.321 13:59:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.321 13:59:04 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:01.321 13:59:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
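
get_nodes above enumerates /sys/devices/system/node/node* (one node on this VM) and records 512 expected pages per node. The per-node pool can also be read from the hugepages sysfs tree; a sketch of that walk, assuming the 2048 kB page size reported in the snapshots above:

  # Sketch: per-node 2 MiB hugepage pools via sysfs, mirroring get_nodes.
  shopt -s extglob nullglob
  for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                                  # node index, as in nodes_sys
    pool=$node/hugepages/hugepages-2048kB/nr_hugepages
    [[ -r $pool ]] && echo "node$n: $(<"$pool") pages"
  done
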
00:04:01.321 13:59:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.321 13:59:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.321 13:59:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.321 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.321 13:59:04 -- setup/common.sh@18 -- # local node=0
00:04:01.321 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.321 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.321 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.321 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.321 13:59:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.321 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.321 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.321 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.321 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.321 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8964080 kB' 'MemUsed: 3273016 kB' 'SwapCached: 0 kB' 'Active: 467116 kB' 'Inactive: 1422620 kB' 'Active(anon): 127892 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'FilePages: 1772336 kB' 'Mapped: 50688 kB' 'AnonPages: 119004 kB' 'Shmem: 10492 kB' 'KernelStack: 6404 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161540 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:01.321 13:59:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every node0 meminfo key, continuing until HugePages_Surp matches]
00:04:01.322 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.322 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.322 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.322 13:59:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.322 13:59:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.322 13:59:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.322 13:59:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.322 node0=512 expecting 512
00:04:01.322 13:59:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:01.322 13:59:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:01.322 
00:04:01.322 real	0m0.548s
00:04:01.322 user	0m0.237s
00:04:01.322 sys	0m0.338s
00:04:01.322 ************************************
00:04:01.322 END TEST per_node_1G_alloc
00:04:01.322 ************************************
00:04:01.322 13:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:01.322 13:59:04 -- common/autotest_common.sh@10 -- # set +x
13:59:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.585 13:59:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.585 13:59:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.585 13:59:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.585 13:59:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.585 13:59:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.585 13:59:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:01.585 13:59:04 -- setup/hugepages.sh@83 -- # : 0 00:04:01.585 13:59:04 -- setup/hugepages.sh@84 -- # : 0 00:04:01.585 13:59:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.585 13:59:04 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:01.585 13:59:04 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:01.585 13:59:04 -- setup/hugepages.sh@153 -- # setup output 00:04:01.585 13:59:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.585 13:59:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.846 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.846 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.846 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.846 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.846 13:59:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:01.846 13:59:04 -- setup/hugepages.sh@89 -- # local node 00:04:01.846 13:59:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.846 13:59:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.846 13:59:04 -- setup/hugepages.sh@92 -- # local surp 00:04:01.846 13:59:04 -- setup/hugepages.sh@93 -- # local resv 00:04:01.846 13:59:04 -- setup/hugepages.sh@94 -- # local anon 00:04:01.846 13:59:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.846 13:59:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.846 13:59:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.846 13:59:04 -- setup/common.sh@18 -- # local node= 00:04:01.846 13:59:04 -- setup/common.sh@19 -- # local var val 00:04:01.846 13:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.846 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.846 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.846 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.846 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.846 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.846 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.846 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.846 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915520 kB' 'MemAvailable: 9472100 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467468 kB' 'Inactive: 1422620 kB' 'Active(anon): 128244 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119356 kB' 'Mapped: 50836 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161636 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98312 kB' 'KernelStack: 6376 kB' 'PageTables: 
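Before the verification below, the allocation math in one place: get_test_nr_hugepages turned the 2097152 kB request into nr_hugepages=1024 default-size (2048 kB) pages, and because HUGE_EVEN_ALLOC=yes, get_test_nr_hugepages_per_node spread that budget over the available NUMA nodes -- here a single node, so nodes_test[0]=1024. A minimal bash sketch of that even split (distribute_evenly is an illustrative name, not the SPDK function; the real logic lives in SPDK's test setup/hugepages.sh):

    #!/usr/bin/env bash
    # Sketch of a HUGE_EVEN_ALLOC-style distribution, under the assumptions above.
    distribute_evenly() {
        local total=$1 no_nodes=$2
        local -a nodes_test=()
        local node per_node=$((total / no_nodes))
        for ((node = no_nodes - 1; node >= 0; node--)); do
            nodes_test[node]=$per_node   # mirrors "nodes_test[_no_nodes - 1]=1024" in the trace
        done
        for node in "${!nodes_test[@]}"; do
            echo "node${node}=${nodes_test[node]}"
        done
    }

    distribute_evenly 1024 1   # prints: node0=1024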
00:04:01.846 13:59:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:01.846 13:59:04 -- setup/hugepages.sh@89 -- # local node
00:04:01.846 13:59:04 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.846 13:59:04 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.846 13:59:04 -- setup/hugepages.sh@92 -- # local surp
00:04:01.846 13:59:04 -- setup/hugepages.sh@93 -- # local resv
00:04:01.846 13:59:04 -- setup/hugepages.sh@94 -- # local anon
00:04:01.846 13:59:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.846 13:59:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.846 13:59:04 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.846 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.846 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.846 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.846 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.846 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.846 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.846 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.846 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.846 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.846 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.846 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915520 kB' 'MemAvailable: 9472100 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467468 kB' 'Inactive: 1422620 kB' 'Active(anon): 128244 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119356 kB' 'Mapped: 50836 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161636 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98312 kB' 'KernelStack: 6376 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@31-32 read and skip every field from MemTotal through HardwareCorrupted -- none matches AnonHugePages]
00:04:01.847 13:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.847 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.847 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.847 13:59:04 -- setup/hugepages.sh@97 -- # anon=0
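The get_meminfo call traced above is the workhorse of this test: it walks /proc/meminfo with IFS=': ' and prints the value of one field. A stripped-down re-creation of that pattern (illustrative only; SPDK's setup/common.sh version additionally handles per-node meminfo files and the mapfile-based "Node N" prefix stripping visible in the trace):

    #!/usr/bin/env bash
    # Minimal stand-in for setup/common.sh's get_meminfo: print the value of
    # a single /proc/meminfo field, e.g. "get_meminfo HugePages_Surp" -> "0".
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # kB for sized fields, a bare page count for HugePages_*
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # the run above recorded 0 here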
00:04:01.847 13:59:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.847 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.847 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.847 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.847 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.847 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.847 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.847 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.847 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.847 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.847 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.847 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.847 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915520 kB' 'MemAvailable: 9472100 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467372 kB' 'Inactive: 1422620 kB' 'Active(anon): 128148 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119200 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161704 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98380 kB' 'KernelStack: 6416 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@31-32 read and skip every field from MemTotal through HugePages_Rsvd -- none matches HugePages_Surp]
00:04:01.848 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.848 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.848 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.848 13:59:04 -- setup/hugepages.sh@99 -- # surp=0
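Worth pausing on the figures the snapshots keep repeating: HugePages_Total: 1024 with Hugepagesize: 2048 kB is exactly the requested pool, and HugePages_Free: 1024 with surp=0 says none of it is consumed or oversubscribed yet. The arithmetic, as a one-liner:

    echo $((1024 * 2048)) kB   # 2097152 kB -- the Hugetlb line, i.e. the 2 GiB passed to get_test_nr_hugepages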
00:04:01.848 13:59:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.848 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.848 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.848 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.848 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.848 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.848 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.848 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.848 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.848 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.848 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.848 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.848 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915520 kB' 'MemAvailable: 9472100 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467344 kB' 'Inactive: 1422620 kB' 'Active(anon): 128120 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119224 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161696 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98372 kB' 'KernelStack: 6400 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@31-32 read and skip every field from MemTotal through HugePages_Free -- none matches HugePages_Rsvd]
00:04:01.848 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.848 13:59:04 -- setup/common.sh@33 -- # echo 0
00:04:01.848 13:59:04 -- setup/common.sh@33 -- # return 0
00:04:01.848 nr_hugepages=1024
00:04:01.848 13:59:04 -- setup/hugepages.sh@100 -- # resv=0
00:04:01.848 13:59:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:01.848 resv_hugepages=0
00:04:01.848 surplus_hugepages=0
00:04:01.848 13:59:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.848 13:59:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.848 anon_hugepages=0
00:04:01.848 13:59:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.848 13:59:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.848 13:59:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
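Those two (( ... )) guards are the heart of verify_nr_hugepages: the pool read back from the kernel has to be covered by the requested pages plus surplus plus reserved, and with surp=0 and resv=0 it must equal the request outright. A hedged standalone rendering, reusing the get_meminfo sketch from earlier (verify_hugepage_accounting is an illustrative name, not the SPDK function, and the invariant's exact form is this editor's reading of the trace):

    # Illustrative invariant check; assumes the get_meminfo sketch above is in scope.
    verify_hugepage_accounting() {
        local expected=$1   # NRHUGE requested by the test (1024 in this run)
        local total surp resv
        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        (( total == expected + surp + resv )) || return 1   # every page accounted for
        (( total == expected ))   # no surplus/reserved pages expected in this test
    }

    verify_hugepage_accounting 1024 && echo OK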
00:04:01.848 13:59:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.848 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.848 13:59:04 -- setup/common.sh@18 -- # local node=
00:04:01.848 13:59:04 -- setup/common.sh@19 -- # local var val
00:04:01.848 13:59:04 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.848 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.848 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.848 13:59:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.848 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.848 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.848 13:59:04 -- setup/common.sh@31 -- # IFS=': '
00:04:01.848 13:59:04 -- setup/common.sh@31 -- # read -r var val _
00:04:01.848 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915520 kB' 'MemAvailable: 9472100 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467012 kB' 'Inactive: 1422620 kB' 'Active(anon): 127788 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118884 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161684 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98360 kB' 'KernelStack: 6368 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@31-32 read and skip every field from MemTotal through ShmemPmdMapped -- none matches HugePages_Total]
00:04:01.849 13:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.849 13:59:04 --
setup/common.sh@32 -- # continue 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # continue 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # continue 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # continue 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # continue 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.849 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.849 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.849 13:59:04 -- setup/common.sh@33 -- # echo 1024 00:04:01.849 13:59:04 -- setup/common.sh@33 -- # return 0 00:04:02.108 13:59:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.108 13:59:04 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.108 13:59:04 -- setup/hugepages.sh@27 -- # local node 00:04:02.108 13:59:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.108 13:59:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.108 13:59:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.108 13:59:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.108 13:59:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.108 13:59:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.108 13:59:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.108 13:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.108 13:59:04 -- setup/common.sh@18 -- # local node=0 00:04:02.108 13:59:04 -- setup/common.sh@19 -- # local var val 00:04:02.108 13:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.108 13:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.108 13:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.108 13:59:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.108 13:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.108 13:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 13:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7915520 kB' 'MemUsed: 4321576 kB' 'SwapCached: 0 kB' 'Active: 467212 kB' 'Inactive: 1422620 kB' 'Active(anon): 127988 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1772336 kB' 'Mapped: 50792 kB' 'AnonPages: 119032 kB' 'Shmem: 10492 kB' 
'KernelStack: 6352 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161684 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.108 13:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 13:59:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 13:59:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # continue 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 13:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 13:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 
13:59:04 -- setup/common.sh@33 -- # echo 0 00:04:02.109 13:59:04 -- setup/common.sh@33 -- # return 0 00:04:02.109 13:59:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.109 13:59:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.109 13:59:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.109 13:59:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.109 node0=1024 expecting 1024 00:04:02.109 13:59:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.109 13:59:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.109 00:04:02.109 real 0m0.544s 00:04:02.109 user 0m0.222s 00:04:02.109 sys 0m0.348s 00:04:02.109 ************************************ 00:04:02.109 END TEST even_2G_alloc 00:04:02.109 ************************************ 00:04:02.109 13:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.109 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.109 13:59:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:02.109 13:59:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.109 13:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.109 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.109 ************************************ 00:04:02.109 START TEST odd_alloc 00:04:02.109 ************************************ 00:04:02.109 13:59:04 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:02.109 13:59:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:02.109 13:59:04 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:02.109 13:59:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.109 13:59:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.110 13:59:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:02.110 13:59:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.110 13:59:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.110 13:59:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.110 13:59:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:02.110 13:59:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.110 13:59:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.110 13:59:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.110 13:59:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.110 13:59:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.110 13:59:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.110 13:59:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:02.110 13:59:04 -- setup/hugepages.sh@83 -- # : 0 00:04:02.110 13:59:04 -- setup/hugepages.sh@84 -- # : 0 00:04:02.110 13:59:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.110 13:59:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:02.110 13:59:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:02.110 13:59:04 -- setup/hugepages.sh@160 -- # setup output 00:04:02.110 13:59:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.110 13:59:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.376 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.377 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.377 0000:00:06.0 (1b36 0010): Already 
00:04:02.377 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.377 13:59:05 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:02.377 13:59:05 -- setup/hugepages.sh@89 -- # local node
00:04:02.377 13:59:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.377 13:59:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.377 13:59:05 -- setup/hugepages.sh@92 -- # local surp
00:04:02.377 13:59:05 -- setup/hugepages.sh@93 -- # local resv
00:04:02.377 13:59:05 -- setup/hugepages.sh@94 -- # local anon
00:04:02.377 13:59:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.377 13:59:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.377 13:59:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.377 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:02.377 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:02.377 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.377 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.377 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.377 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.377 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.377 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.377 13:59:05 -- setup/common.sh@31 -- # IFS=': '
00:04:02.377 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922128 kB' 'MemAvailable: 9478708 kB' 'Buffers: 3444 kB' 'Cached: 1768892 kB' 'SwapCached: 0 kB' 'Active: 467156 kB' 'Inactive: 1422620 kB' 'Active(anon): 127932 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119140 kB' 'Mapped: 50900 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161588 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98264 kB' 'KernelStack: 6396 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:02.377 13:59:05 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: per-field compare-and-continue scan against AnonHugePages until the match below]
00:04:02.643 13:59:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.643 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:02.643 13:59:05 -- setup/common.sh@33 -- # return 0
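[note: every get_meminfo call traced in this log (HugePages_Total, HugePages_Surp on node0, AnonHugePages just above, and the reads that follow) runs the same field scan from setup/common.sh. Below is a minimal self-contained sketch of that pattern, reconstructed from the xtrace; the shape matches what the trace shows, but treat the exact wording as an illustration rather than the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2          # field name, optional NUMA node
        local var val _ mem_f=/proc/meminfo mem
        # per-node counters live in sysfs when a node is requested
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix; strip it
        while IFS=': ' read -r var val _; do
            # first matching field wins; val is the number, _ swallows the "kB" unit
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the even_2G_alloc dump earlier, get_meminfo HugePages_Total printed 1024; against the dump just above, get_meminfo AnonHugePages prints 0, matching the echo lines in the trace.]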
00:04:02.643 13:59:05 -- setup/hugepages.sh@97 -- # anon=0
00:04:02.643 13:59:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.643 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.643 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:02.643 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:02.643 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.643 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.643 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.643 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.643 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.643 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.643 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922128 kB' 'MemAvailable: 9478712 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467040 kB' 'Inactive: 1422624 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161624 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98300 kB' 'KernelStack: 6384 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:02.643 13:59:05 -- setup/common.sh@31 -- # IFS=': '
00:04:02.643 13:59:05 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: per-field compare-and-continue scan against HugePages_Surp until the match below]
00:04:02.644 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.644 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:02.644 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:02.644 13:59:05 -- setup/hugepages.sh@99 -- # surp=0
00:04:02.644 13:59:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.644 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.644 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:02.644 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:02.644 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.644 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.644 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.644 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.644 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.644 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.644 13:59:05 -- setup/common.sh@31 -- # IFS=': '
00:04:02.644 13:59:05 -- setup/common.sh@31 -- # read -r var val _
00:04:02.644 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922128 kB' 'MemAvailable: 9478712 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467040 kB' 'Inactive: 1422624 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118896 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161624 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98300 kB' 'KernelStack: 6384 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: per-field compare-and-continue scan against HugePages_Rsvd; the trace is still mid-scan at this point]
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.645 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.645 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.646 13:59:05 -- setup/common.sh@33 -- # echo 0 00:04:02.646 13:59:05 -- setup/common.sh@33 -- # return 0 00:04:02.646 13:59:05 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.646 nr_hugepages=1025 00:04:02.646 13:59:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:02.646 resv_hugepages=0 00:04:02.646 13:59:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.646 surplus_hugepages=0 00:04:02.646 13:59:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.646 anon_hugepages=0 00:04:02.646 13:59:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.646 13:59:05 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.646 13:59:05 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:02.646 13:59:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.646 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.646 13:59:05 -- setup/common.sh@18 -- # local node= 00:04:02.646 13:59:05 -- setup/common.sh@19 -- # local var val 00:04:02.646 13:59:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.646 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.646 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.646 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.646 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.646 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922128 kB' 'MemAvailable: 9478712 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467300 kB' 'Inactive: 1422624 kB' 'Active(anon): 128076 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161624 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98300 kB' 'KernelStack: 6384 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13457552 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB' 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # continue 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Total == 
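Those last two arithmetic checks are the core assertion of the odd_alloc pool verification: HugePages_Total reported by the kernel must equal the requested page count plus any surplus and reserved pages, and here all the extras are zero. Restated as a minimal standalone bash check (variable names are illustrative; the values are the ones echoed just above, with HugePages_Total fetched next in the trace):

    nr_hugepages=1025     # page count the odd_alloc test requested
    resv_hugepages=0      # HugePages_Rsvd, just fetched via get_meminfo
    surplus_hugepages=0   # HugePages_Surp
    total_hugepages=1025  # HugePages_Total, fetched by the next get_meminfo call
    (( total_hugepages == nr_hugepages + surplus_hugepages + resv_hugepages )) \
        && echo 'hugepage pool consistent'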
00:04:02.646 13:59:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.646 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.646 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:02.646 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:02.646 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.646 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.646 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.646 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.646 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.646 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.646 13:59:05 -- setup/common.sh@31 -- # IFS=': '
00:04:02.646 13:59:05 -- setup/common.sh@31 -- # read -r var val _
00:04:02.646 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922128 kB' 'MemAvailable: 9478712 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467300 kB' 'Inactive: 1422624 kB' 'Active(anon): 128076 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161624 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98300 kB' 'KernelStack: 6384 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:02.646 13:59:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.646 13:59:05 -- setup/common.sh@32 -- # continue
(the same @32 test and 'continue' repeat for every remaining field, MemFree through Unaccepted; none match HugePages_Total)
00:04:02.647 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.647 13:59:05 -- setup/common.sh@33 -- # echo 1025
00:04:02.647 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:02.647 13:59:05 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:02.647 13:59:05 -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.647 13:59:05 -- setup/hugepages.sh@27 -- # local node
00:04:02.647 13:59:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.647 13:59:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:02.647 13:59:05 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:02.647 13:59:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.647 13:59:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.647 13:59:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.647 13:59:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.647 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.647 13:59:05 -- setup/common.sh@18 -- # local node=0
00:04:02.647 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:02.647 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.647 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.647 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.647 13:59:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.647 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.647 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.647 13:59:05 -- setup/common.sh@31 -- # IFS=': '
00:04:02.648 13:59:05 -- setup/common.sh@31 -- # read -r var val _
00:04:02.648 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7922128 kB' 'MemUsed: 4314968 kB' 'SwapCached: 0 kB' 'Active: 467036 kB' 'Inactive: 1422624 kB' 'Active(anon): 127812 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1772340 kB' 'Mapped: 50688 kB' 'AnonPages: 118968 kB' 'Shmem: 10492 kB' 'KernelStack: 6368 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161612 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:02.648 13:59:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.648 13:59:05 -- setup/common.sh@32 -- # continue
(the same @32 test and 'continue' repeat for every remaining field, MemFree through HugePages_Free; none match HugePages_Surp)
00:04:02.648 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.648 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:02.648 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:02.648 13:59:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.648 13:59:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.648 13:59:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.648 13:59:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.648 node0=1025 expecting 1025
00:04:02.648 13:59:05 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:02.648 13:59:05 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:02.648 
00:04:02.648 real 0m0.545s
00:04:02.648 user 0m0.238s
00:04:02.648 sys 0m0.338s
00:04:02.648 13:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:02.648 13:59:05 -- common/autotest_common.sh@10 -- # set +x
00:04:02.649 ************************************
00:04:02.649 END TEST odd_alloc
00:04:02.649 ************************************
00:04:02.649 13:59:05 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:02.649 13:59:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:02.649 13:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.649 13:59:05 -- common/autotest_common.sh@10 -- # set +x
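The wall of '[[ Field == ... ]] / continue' pairs above is bash xtrace (set -x) output of setup/common.sh's get_meminfo helper scanning one meminfo snapshot per call. Reconstructed from the trace alone, the helper works roughly like the sketch below; the loop-variable name and the extglob setting are assumptions, and the per-node branch is why the HugePages_Surp call read /sys/devices/system/node/node0/meminfo, whose snapshot carries MemUsed and FilePages fields that /proc/meminfo lacks:

    shopt -s extglob                     # required for the +([0-9]) pattern below
    get_meminfo() {                      # usage: get_meminfo <Field> [<numa-node>]
        local get=$1 node=$2 var val line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated check that fills the trace
            echo "$val"                        # e.g. 1025 for HugePages_Total above
            return 0
        done
        return 1
    }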
00:04:02.649 ************************************
00:04:02.649 START TEST custom_alloc
00:04:02.649 ************************************
00:04:02.649 13:59:05 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:02.649 13:59:05 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:02.649 13:59:05 -- setup/hugepages.sh@169 -- # local node
00:04:02.649 13:59:05 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:02.649 13:59:05 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:02.649 13:59:05 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:02.649 13:59:05 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:02.649 13:59:05 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:02.649 13:59:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:02.649 13:59:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:02.649 13:59:05 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.649 13:59:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.649 13:59:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:02.649 13:59:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:02.649 13:59:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.649 13:59:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.649 13:59:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:02.649 13:59:05 -- setup/hugepages.sh@83 -- # : 0
00:04:02.649 13:59:05 -- setup/hugepages.sh@84 -- # : 0
00:04:02.649 13:59:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:02.649 13:59:05 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:02.649 13:59:05 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:02.649 13:59:05 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:02.649 13:59:05 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.649 13:59:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.649 13:59:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:02.649 13:59:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:02.649 13:59:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.649 13:59:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.649 13:59:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:02.649 13:59:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:02.649 13:59:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:02.649 13:59:05 -- setup/hugepages.sh@78 -- # return 0
00:04:02.649 13:59:05 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
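The arithmetic the trace just stepped through: get_test_nr_hugepages is handed a pool size of 1048576 kB, and with the 2048 kB Hugepagesize reported in the meminfo snapshots above that works out to 512 huge pages, all assigned to the single NUMA node (hence HUGENODE='nodes_hp[0]=512'). The same computation in isolation, with illustrative variable names:

    size_kb=1048576                      # pool size requested by custom_alloc
    hugepagesize_kb=2048                 # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"    # prints nr_hugepages=512
    no_nodes=1                           # nodes found under /sys/devices/system/node
    echo "nodes_hp[0]=$(( nr_hugepages / no_nodes ))"   # 512 pages on node 0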
00:04:02.649 13:59:05 -- setup/hugepages.sh@187 -- # setup output
00:04:02.649 13:59:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.649 13:59:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:03.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:03.223 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.223 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.223 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.223 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.223 13:59:05 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:03.223 13:59:05 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:03.223 13:59:05 -- setup/hugepages.sh@89 -- # local node
00:04:03.223 13:59:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.223 13:59:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.223 13:59:05 -- setup/hugepages.sh@92 -- # local surp
00:04:03.223 13:59:05 -- setup/hugepages.sh@93 -- # local resv
00:04:03.223 13:59:05 -- setup/hugepages.sh@94 -- # local anon
00:04:03.223 13:59:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
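That '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test is verify_nr_hugepages checking the current value of /sys/kernel/mm/transparent_hugepage/enabled: only when THP is not pinned to [never] does it fetch AnonHugePages, which is what the next get_meminfo call below does (returning 0 here). A sketch of the surrounding logic, reconstructed from the trace with an assumed function name and simplified control flow, reusing the get_meminfo sketch given earlier:

    verify_hugepage_pool() {             # hypothetical stand-in for verify_nr_hugepages
        local expected=$1 anon=0 surp resv total
        # Count anonymous huge pages only when THP mode is not "[never]"
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        # Same accounting identity the odd_alloc test asserted, now for 512 pages
        (( total == expected + surp + resv ))
    }
    verify_hugepage_pool 512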
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[per-field xtrace records elided: setup/common.sh@31-32 step through the remaining /proc/meminfo fields (SwapTotal through HardwareCorrupted) with "read -r var val _" and "continue" until AnonHugePages is reached]
00:04:03.224 13:59:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.224 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:03.224 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:03.224 13:59:05 -- setup/hugepages.sh@97 -- # anon=0
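The scan elided above is the entire trick behind get_meminfo: split each meminfo record into a field name and a value, and echo the value once the requested field matches. A minimal standalone sketch of that pattern, reconstructed from the xtrace records (the real setup/common.sh also snapshots the file first and supports per-node lookups, both omitted here):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo lookup traced above: scan "Field: value kB"
    # records and print the value of the requested field.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only; the "kB" unit lands in _
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo AnonHugePages   # printed 0 on the VM traced above

Because IFS=': ' treats the colon and the padding spaces as one delimiter run, a record like "MemTotal: 12237096 kB" arrives pre-split into the field name, the number, and its unit.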
00:04:03.224 13:59:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.224 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.224 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:03.224 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:03.224 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.224 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.224 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.224 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.224 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.224 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.224 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8977216 kB' 'MemAvailable: 10533800 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467232 kB' 'Inactive: 1422624 kB' 'Active(anon): 128008 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119112 kB' 'Mapped: 50868 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161672 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98348 kB' 'KernelStack: 6396 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 306016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
[per-field xtrace records elided: setup/common.sh@31-32 step through the fields above until HugePages_Surp is found]
00:04:03.225 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.225 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:03.225 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:03.225 13:59:05 -- setup/hugepages.sh@99 -- # surp=0
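Before each scan, the trace snapshots the chosen meminfo source (records @22 through @29 above). The extglob strip at common.sh@29 is what lets one parser serve both /proc/meminfo and the per-node sysfs files, which prefix every record with "Node <n> ". A self-contained sketch of just that step, with simplified argument handling relative to the real script:

    #!/usr/bin/env bash
    # Sketch of the snapshot step at setup/common.sh@22-29: pick the system-wide
    # or per-node meminfo file, slurp it with mapfile, and normalize the sysfs
    # "Node <n> " prefix so the same field scan works on either layout.
    shopt -s extglob
    node=${1-}                          # empty => system-wide /proc/meminfo
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # no-op for /proc/meminfo records
    printf '%s\n' "${mem[@]:0:3}"       # show the first few records

With an empty node argument the sysfs path test fails, exactly as the [[ -e ... ]] record above shows, and the snapshot falls back to /proc/meminfo.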
00:04:03.225 13:59:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.225 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.225 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:03.225 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:03.225 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.225 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.225 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.225 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.225 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.225 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.225 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, identical to the one above except: 'MemFree: 8975960 kB' 'MemAvailable: 10532544 kB' 'Active: 467160 kB' 'Active(anon): 127936 kB' 'AnonPages: 119024 kB' 'Mapped: 50812 kB' 'Slab: 161664 kB' 'SUnreclaim: 98340 kB' 'KernelStack: 6332 kB' 'PageTables: 3912 kB']
[per-field xtrace records elided: setup/common.sh@31-32 step through the fields until HugePages_Rsvd is found]
00:04:03.226 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.226 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:03.226 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:03.226 13:59:05 -- setup/hugepages.sh@100 -- # resv=0
00:04:03.226 13:59:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:03.226 nr_hugepages=512
00:04:03.226 13:59:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.226 resv_hugepages=0
00:04:03.226 13:59:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.226 surplus_hugepages=0
00:04:03.226 13:59:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.226 anon_hugepages=0
00:04:03.226 13:59:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:03.226 13:59:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
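The two arithmetic records above are the point of the whole lookup sequence: the pool must contain exactly the requested number of pages once surplus and reserved pages are accounted for. Restated as a standalone check that mirrors the trace's own invariant (the expected count of 512 comes from this test; the vm.nr_hugepages sysctl path is the standard kernel interface, not something shown in this log):

    #!/usr/bin/env bash
    # Re-derivation of the check at setup/hugepages.sh@107-109 above.
    expected=512
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages)
    if (( expected == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent (nr=$nr_hugepages surp=$surp resv=$resv)"
    else
        echo "mismatch: nr=$nr_hugepages surp=$surp resv=$resv" >&2
    fi

In the traced run all three lookups returned 0 or 512 exactly, so both comparisons succeed and the test proceeds to the per-node verification.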
00:04:03.226 13:59:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.226 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.226 13:59:05 -- setup/common.sh@18 -- # local node=
00:04:03.226 13:59:05 -- setup/common.sh@19 -- # local var val
00:04:03.226 13:59:05 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.226 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.226 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.226 13:59:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.226 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.226 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.226 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, identical to the previous one except: 'Active: 467368 kB' 'Active(anon): 128144 kB' 'AnonPages: 118972 kB' 'KernelStack: 6316 kB' 'PageTables: 3860 kB']
[per-field xtrace records elided: setup/common.sh@31-32 step through the fields until HugePages_Total is found]
00:04:03.228 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.228 13:59:05 -- setup/common.sh@33 -- # echo 512
00:04:03.228 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:03.228 13:59:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
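A quick cross-check the snapshots above make possible: with a single hugepage size in play, HugePages_Total pages of Hugepagesize kB should equal the reported Hugetlb figure (512 * 2048 kB = 1048576 kB here). The sketch below recomputes that product; note that Hugetlb sums pages of all sizes, so the equality is only expected on single-size systems like this VM:

    #!/usr/bin/env bash
    # Worked arithmetic check against the meminfo snapshots above.
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    total=$(meminfo_val HugePages_Total)
    size_kb=$(meminfo_val Hugepagesize)
    hugetlb_kb=$(meminfo_val Hugetlb)
    echo "expected $(( total * size_kb )) kB, kernel reports Hugetlb: $hugetlb_kb kB"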
13:59:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # continue 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.228 13:59:05 -- setup/common.sh@33 -- # echo 512 00:04:03.228 13:59:05 -- setup/common.sh@33 -- # return 0 00:04:03.228 13:59:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.228 13:59:05 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.228 13:59:05 -- setup/hugepages.sh@27 -- # local node 00:04:03.228 13:59:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.228 13:59:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.228 13:59:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.228 13:59:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.228 13:59:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.228 13:59:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.228 13:59:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.228 13:59:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.228 13:59:05 -- setup/common.sh@18 -- # local node=0 00:04:03.228 13:59:05 -- setup/common.sh@19 -- # local var val 00:04:03.228 13:59:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.228 13:59:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.228 13:59:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.228 13:59:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.228 13:59:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.228 13:59:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.228 13:59:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8975960 kB' 'MemUsed: 3261136 kB' 'SwapCached: 0 kB' 'Active: 467712 kB' 'Inactive: 1422624 kB' 'Active(anon): 128488 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 1772340 kB' 'Mapped: 51072 kB' 'AnonPages: 119596 kB' 'Shmem: 10492 kB' 'KernelStack: 6348 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161664 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # continue 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # continue 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.228 13:59:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.228 13:59:05 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.228 13:59:05 -- setup/common.sh@32 -- # continue
00:04:03.228 13:59:05 -- setup/common.sh@31 -- # [trace condensed: the IFS=': ' read/compare loop walks the remaining node0 meminfo fields (SwapCached through HugePages_Free); each misses the pattern \H\u\g\e\P\a\g\e\s\_\S\u\r\p and hits 'continue']
00:04:03.228 13:59:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.228 13:59:05 -- setup/common.sh@33 -- # echo 0
00:04:03.228 13:59:05 -- setup/common.sh@33 -- # return 0
00:04:03.228 13:59:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.228 13:59:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.228 13:59:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.228 13:59:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.228 13:59:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.228 node0=512 expecting 512
00:04:03.228 13:59:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:03.228 real 0m0.560s
00:04:03.228 user 0m0.249s
00:04:03.228 sys 0m0.339s
00:04:03.228 13:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:03.228 ************************************
00:04:03.228 END TEST custom_alloc
00:04:03.228 ************************************
00:04:03.228 13:59:05 -- common/autotest_common.sh@10 -- # set +x
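For orientation: the START/END banners and the real/user/sys timing lines above come from the harness's run_test wrapper in common/autotest_common.sh. A minimal sketch of a wrapper with that observable behavior (shape inferred from the trace, not the verbatim SPDK implementation):

  #!/usr/bin/env bash
  # run_test-style wrapper (sketch): banner, timed execution, closing banner
  run_test() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # emits the real/user/sys lines seen in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  no_shrink_alloc() { sleep 0.1; }           # stand-in body so the sketch runs
  run_test no_shrink_alloc no_shrink_alloc   # banner label and test function share a name here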
00:04:03.228 13:59:06 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:03.228 13:59:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:03.228 13:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:03.229 13:59:06 -- common/autotest_common.sh@10 -- # set +x
00:04:03.229 ************************************
00:04:03.229 START TEST no_shrink_alloc
00:04:03.229 ************************************
00:04:03.229 13:59:06 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:03.229 13:59:06 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:03.229 13:59:06 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.229 13:59:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:03.229 13:59:06 -- setup/hugepages.sh@51 -- # shift
00:04:03.229 13:59:06 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:03.229 13:59:06 -- setup/hugepages.sh@52 -- # local node_ids
00:04:03.229 13:59:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.229 13:59:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.229 13:59:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:03.229 13:59:06 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:03.229 13:59:06 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.229 13:59:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.229 13:59:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:03.229 13:59:06 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.229 13:59:06 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.229 13:59:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:03.229 13:59:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:03.229 13:59:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:03.229 13:59:06 -- setup/hugepages.sh@73 -- # return 0
00:04:03.229 13:59:06 -- setup/hugepages.sh@198 -- # setup output
00:04:03.229 13:59:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.229 13:59:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:03.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:03.802 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.802 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.802 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.802 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:03.802 13:59:06 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:03.802 13:59:06 -- setup/hugepages.sh@89 -- # local node
00:04:03.802 13:59:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.802 13:59:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.802 13:59:06 -- setup/hugepages.sh@92 -- # local surp
00:04:03.802 13:59:06 -- setup/hugepages.sh@93 -- # local resv
00:04:03.802 13:59:06 -- setup/hugepages.sh@94 -- # local anon
00:04:03.802 13:59:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.802 13:59:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
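A reading aid for the compare loops that follow: under set -x, bash re-prints the unquoted right-hand side of a [[ ... == pattern ]] test with every character backslash-escaped, which is why the trace renders AnonHugePages as \A\n\o\n\H\u\g\e\P\a\g\e\s. A two-line reproduction:

  set -x
  var=AnonHugePages
  [[ $var == AnonHugePages ]] && echo match
  # the xtrace line reads: [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]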
00:04:03.802 13:59:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.802 13:59:06 -- setup/common.sh@18 -- # local node=
00:04:03.802 13:59:06 -- setup/common.sh@19 -- # local var val
00:04:03.802 13:59:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.802 13:59:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.802 13:59:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.802 13:59:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.802 13:59:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.802 13:59:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.802 13:59:06 -- setup/common.sh@31 -- # IFS=': '
00:04:03.802 13:59:06 -- setup/common.sh@31 -- # read -r var val _
00:04:03.802 13:59:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7927060 kB' 'MemAvailable: 9483644 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467336 kB' 'Inactive: 1422624 kB' 'Active(anon): 128112 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119468 kB' 'Mapped: 50752 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161664 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98340 kB' 'KernelStack: 6480 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:03.802 13:59:06 -- setup/common.sh@31 -- # [trace condensed: the IFS=': ' read/compare loop walks every snapshot field from MemTotal through HardwareCorrupted; each misses \A\n\o\n\H\u\g\e\P\a\g\e\s and hits 'continue']
00:04:03.803 13:59:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.803 13:59:06 -- setup/common.sh@33 -- # echo 0
00:04:03.803 13:59:06 -- setup/common.sh@33 -- # return 0
00:04:03.803 13:59:06 -- setup/hugepages.sh@97 -- # anon=0
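The call that just returned (get_meminfo AnonHugePages -> anon=0) is the same routine invoked three more times below. Reconstructed from the setup/common.sh@17-@33 markers in the trace, its core logic is roughly the following; treat it as a sketch, not the verbatim script:

  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # a per-node query reads the node-local file instead (node is empty in this trace)
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long compare/continue runs in the log
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp   # prints 0 on this VM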
00:04:03.803 13:59:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.803 13:59:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.803 13:59:06 -- setup/common.sh@18 -- # local node=
00:04:03.803 13:59:06 -- setup/common.sh@19 -- # local var val
00:04:03.803 13:59:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.803 13:59:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.803 13:59:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.803 13:59:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.803 13:59:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.803 13:59:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.803 13:59:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7927060 kB' 'MemAvailable: 9483644 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 466932 kB' 'Inactive: 1422624 kB' 'Active(anon): 127708 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118972 kB' 'Mapped: 50852 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161632 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98308 kB' 'KernelStack: 6352 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:03.803 13:59:06 -- setup/common.sh@31 -- # [trace condensed: the IFS=': ' read/compare loop walks every snapshot field from MemTotal through HugePages_Rsvd; each misses \H\u\g\e\P\a\g\e\s\_\S\u\r\p and hits 'continue']
00:04:03.804 13:59:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.804 13:59:06 -- setup/common.sh@33 -- # echo 0
00:04:03.804 13:59:06 -- setup/common.sh@33 -- # return 0
00:04:03.804 13:59:06 -- setup/hugepages.sh@99 -- # surp=0
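At this point the scan has produced anon=0 and surp=0, and HugePages_Rsvd is fetched next. All four HugePages_* counters involved here are exported by the kernel in /proc/meminfo and can be inspected in one pass:

  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
  # HugePages_Total: 1024   pages currently in the kernel pool (nr_hugepages)
  # HugePages_Free:  1024   pool pages not yet faulted into any mapping
  # HugePages_Rsvd:  0      pages reserved for mappings but not yet faulted in
  # HugePages_Surp:  0      surplus pages above nr_hugepages (overcommit pool)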
00:04:03.804 13:59:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.804 13:59:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.804 13:59:06 -- setup/common.sh@18 -- # local node=
00:04:03.804 13:59:06 -- setup/common.sh@19 -- # local var val
00:04:03.804 13:59:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.804 13:59:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.804 13:59:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.804 13:59:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.804 13:59:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.804 13:59:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.804 13:59:06 -- setup/common.sh@31 -- # IFS=': '
00:04:03.804 13:59:06 -- setup/common.sh@31 -- # read -r var val _
00:04:03.804 13:59:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7927060 kB' 'MemAvailable: 9483644 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467148 kB' 'Inactive: 1422624 kB' 'Active(anon): 127924 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118976 kB' 'Mapped: 50860 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161628 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98304 kB' 'KernelStack: 6352 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:03.804 13:59:06 -- setup/common.sh@31 -- # [trace condensed: the IFS=': ' read/compare loop walks every snapshot field from MemTotal through HugePages_Free; each misses \H\u\g\e\P\a\g\e\s\_\R\s\v\d and hits 'continue']
00:04:03.805 13:59:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.805 13:59:06 -- setup/common.sh@33 -- # echo 0
00:04:03.805 13:59:06 -- setup/common.sh@33 -- # return 0
00:04:03.805 13:59:06 -- setup/hugepages.sh@100 -- # resv=0
00:04:03.805 nr_hugepages=1024
00:04:03.805 resv_hugepages=0
00:04:03.805 surplus_hugepages=0
00:04:03.805 anon_hugepages=0
00:04:03.805 13:59:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.805 13:59:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.805 13:59:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.805 13:59:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.805 13:59:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.805 13:59:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
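The two arithmetic checks at hugepages.sh@107 and @109 are the heart of the verification: the pool must equal the requested size once surplus and reserved pages are accounted for, and with surp=0 and resv=0 both collapse to a plain 1024 == 1024. Restated as a standalone snippet using the get_meminfo sketch above (an illustration of the invariant, not the script's exact control flow):

  nr_hugepages=1024                       # size requested by get_test_nr_hugepages
  surp=$(get_meminfo HugePages_Surp)      # 0 in the trace above
  resv=$(get_meminfo HugePages_Rsvd)      # 0 in the trace above
  total=$(get_meminfo HugePages_Total)    # fetched in the trace that follows
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool size mismatch' >&2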
00:04:03.805 13:59:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.805 13:59:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.805 13:59:06 -- setup/common.sh@18 -- # local node=
00:04:03.805 13:59:06 -- setup/common.sh@19 -- # local var val
00:04:03.805 13:59:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.805 13:59:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.805 13:59:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.805 13:59:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.805 13:59:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.805 13:59:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.805 13:59:06 -- setup/common.sh@31 -- # IFS=': '
00:04:03.805 13:59:06 -- setup/common.sh@31 -- # read -r var val _
00:04:03.805 13:59:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7927060 kB' 'MemAvailable: 9483644 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467088 kB' 'Inactive: 1422624 kB' 'Active(anon): 127864 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118908 kB' 'Mapped: 50860 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161620 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98296 kB' 'KernelStack: 6320 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:03.805 13:59:06 -- setup/common.sh@31 -- # [trace condensed: the IFS=': ' read/compare loop walks the snapshot fields from MemTotal through Percpu; each misses \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and hits 'continue']
00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue
00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # continue 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.806 13:59:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.806 13:59:06 -- setup/common.sh@33 -- # echo 1024 00:04:03.806 13:59:06 -- setup/common.sh@33 -- # return 0 00:04:03.806 13:59:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.806 13:59:06 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.806 13:59:06 -- setup/hugepages.sh@27 -- # local node 00:04:03.806 13:59:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.806 13:59:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.806 13:59:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.806 13:59:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.806 13:59:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.806 13:59:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.806 13:59:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.806 13:59:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.806 13:59:06 -- setup/common.sh@18 -- # local node=0 
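The trace above is the whole get_meminfo mechanism: pick /proc/meminfo (or a per-node meminfo file when a node argument is given), mapfile the contents into an array, strip the "Node <n> " prefix that the sysfs copies carry, then read key/value pairs with IFS=': ' until the requested key matches, and echo its value. A minimal standalone sketch of that flow, reconstructed from the trace rather than copied verbatim from SPDK's setup/common.sh:

#!/usr/bin/env bash
# Sketch of the get_meminfo flow seen in the trace (not the verbatim helper).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem

    # With a node argument, prefer the per-node statistics from sysfs.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix on sysfs lines

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total     # system-wide; prints 1024 in this run
get_meminfo HugePages_Surp 0    # node 0; prints 0 in this run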
00:04:03.806 13:59:06 -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.806 13:59:06 -- setup/hugepages.sh@27 -- # local node
00:04:03.806 13:59:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.806 13:59:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:03.806 13:59:06 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:03.806 13:59:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.806 13:59:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.806 13:59:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.806 13:59:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.806 13:59:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.806 13:59:06 -- setup/common.sh@18 -- # local node=0
00:04:03.806 13:59:06 -- setup/common.sh@19 -- # local var val
00:04:03.806 13:59:06 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.806 13:59:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.806 13:59:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.806 13:59:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.806 13:59:06 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.806 13:59:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.806 13:59:06 -- setup/common.sh@31 -- # IFS=': '
00:04:03.806 13:59:06 -- setup/common.sh@31 -- # read -r var val _
00:04:03.806 13:59:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7927060 kB' 'MemUsed: 4310036 kB' 'SwapCached: 0 kB' 'Active: 466792 kB' 'Inactive: 1422624 kB' 'Active(anon): 127568 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1772340 kB' 'Mapped: 50860 kB' 'AnonPages: 118612 kB' 'Shmem: 10492 kB' 'KernelStack: 6372 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161620 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:03.807 13:59:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.807 13:59:06 -- setup/common.sh@33 -- # echo 0
00:04:03.807 13:59:06 -- setup/common.sh@33 -- # return 0
00:04:03.807 13:59:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.807 13:59:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.807 13:59:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.807 13:59:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.807 node0=1024 expecting 1024
00:04:03.807 13:59:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:03.807 13:59:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:03.807 13:59:06 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:03.807 13:59:06 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:03.807 13:59:06 -- setup/hugepages.sh@202 -- # setup output
00:04:03.807 13:59:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.807 13:59:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:04.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:04.379 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.379 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.379 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.379 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:04.379 INFO: Requested 512 hugepages but 1024 already allocated on node0
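The INFO line shows the pool-resize path: the test re-runs setup.sh with NRHUGE=512 while CLEAR_HUGE=no leaves the existing 1024-page pool in place, and the script keeps the larger allocation rather than shrinking it. A hedged sketch of that check against the standard sysfs hugepage interface; the paths are the kernel's stock layout for 2 MiB pages, and the NRHUGE/node names mirror the trace, not setup.sh itself:

# Sketch only: grow-but-never-shrink hugepage allocation for one node.
# Assumes 2048 kB pages and the stock sysfs layout; needs root to write.
NRHUGE=${NRHUGE:-512}
node=node0
nr=/sys/devices/system/node/$node/hugepages/hugepages-2048kB/nr_hugepages

allocated=$(<"$nr")
if (( allocated >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on $node"
else
    echo "$NRHUGE" > "$nr"   # grow the per-node pool
fi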
00:04:04.379 13:59:07 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:04.379 13:59:07 -- setup/hugepages.sh@89 -- # local node
00:04:04.379 13:59:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.379 13:59:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.379 13:59:07 -- setup/hugepages.sh@92 -- # local surp
00:04:04.379 13:59:07 -- setup/hugepages.sh@93 -- # local resv
00:04:04.379 13:59:07 -- setup/hugepages.sh@94 -- # local anon
00:04:04.379 13:59:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.379 13:59:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.379 13:59:07 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.379 13:59:07 -- setup/common.sh@18 -- # local node=
00:04:04.379 13:59:07 -- setup/common.sh@19 -- # local var val
00:04:04.379 13:59:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.379 13:59:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.379 13:59:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.379 13:59:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.379 13:59:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.379 13:59:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.379 13:59:07 -- setup/common.sh@31 -- # IFS=': '
00:04:04.379 13:59:07 -- setup/common.sh@31 -- # read -r var val _
00:04:04.379 13:59:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924072 kB' 'MemAvailable: 9480656 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467896 kB' 'Inactive: 1422624 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50860 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161692 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98368 kB' 'KernelStack: 6440 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:04.380 13:59:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.380 13:59:07 -- setup/common.sh@33 -- # echo 0
00:04:04.380 13:59:07 -- setup/common.sh@33 -- # return 0
00:04:04.380 13:59:07 -- setup/hugepages.sh@97 -- # anon=0
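verify_nr_hugepages only folds anonymous hugepages into the tally when transparent hugepages are enabled: the bracketed token in the kernel's THP control file marks the active mode, which is what the "always [madvise] never != *\[\n\e\v\e\r\]*" test at hugepages.sh@96 above is checking. A sketch of that gate, reusing the get_meminfo sketch from earlier; the control-file path is the kernel's standard one:

# Sketch of the THP gate: only count AnonHugePages when THP is not [never].
thp=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ $(<"$thp") != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, hence anon=0
fi
echo "anon_hugepages=$anon"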
00:04:04.380 13:59:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.380 13:59:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.380 13:59:07 -- setup/common.sh@18 -- # local node=
00:04:04.380 13:59:07 -- setup/common.sh@19 -- # local var val
00:04:04.380 13:59:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.380 13:59:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.380 13:59:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.380 13:59:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.380 13:59:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.380 13:59:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.380 13:59:07 -- setup/common.sh@31 -- # IFS=': '
00:04:04.380 13:59:07 -- setup/common.sh@31 -- # read -r var val _
00:04:04.380 13:59:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924072 kB' 'MemAvailable: 9480656 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467292 kB' 'Inactive: 1422624 kB' 'Active(anon): 128068 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 50912 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161688 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98364 kB' 'KernelStack: 6356 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.381 13:59:07 -- setup/common.sh@33 -- # echo 0
00:04:04.381 13:59:07 -- setup/common.sh@33 -- # return 0
00:04:04.381 13:59:07 -- setup/hugepages.sh@99 -- # surp=0
00:04:04.381 13:59:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.381 13:59:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.381 13:59:07 -- setup/common.sh@18 -- # local node=
00:04:04.381 13:59:07 -- setup/common.sh@19 -- # local var val
00:04:04.381 13:59:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.381 13:59:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.381 13:59:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.381 13:59:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.381 13:59:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.381 13:59:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': '
00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _
00:04:04.381 13:59:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924072 kB' 'MemAvailable: 9480656 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 467104 kB' 'Inactive: 1422624 kB' 'Active(anon): 127880 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118928 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161680 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98356 kB' 'KernelStack: 6336 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB'
00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.381 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.381 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.382 13:59:07 -- setup/common.sh@33 -- # echo 0 00:04:04.382 13:59:07 -- setup/common.sh@33 -- # return 0 00:04:04.382 13:59:07 -- setup/hugepages.sh@100 -- # resv=0 00:04:04.382 nr_hugepages=1024 00:04:04.382 13:59:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.382 resv_hugepages=0 00:04:04.382 13:59:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.382 surplus_hugepages=0 00:04:04.382 13:59:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.382 anon_hugepages=0 00:04:04.382 13:59:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.382 13:59:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.382 13:59:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.382 13:59:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.382 13:59:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.382 13:59:07 -- setup/common.sh@18 -- # local node= 00:04:04.382 13:59:07 -- setup/common.sh@19 -- # local var val 00:04:04.382 13:59:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.382 13:59:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.382 13:59:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.382 13:59:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.382 13:59:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.382 13:59:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924072 kB' 'MemAvailable: 9480656 kB' 'Buffers: 3444 kB' 'Cached: 1768896 kB' 'SwapCached: 0 kB' 'Active: 466772 kB' 'Inactive: 1422624 kB' 'Active(anon): 127548 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118628 kB' 'Mapped: 50688 kB' 'Shmem: 10492 kB' 'KReclaimable: 63324 kB' 'Slab: 161680 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98356 kB' 'KernelStack: 6320 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 306216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 218988 kB' 'DirectMap2M: 6072320 kB' 'DirectMap1G: 8388608 kB' 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 
-- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.382 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.382 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.383 13:59:07 -- setup/common.sh@33 -- # echo 1024 00:04:04.383 13:59:07 -- setup/common.sh@33 -- # return 0 00:04:04.383 13:59:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.383 13:59:07 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.383 13:59:07 -- setup/hugepages.sh@27 -- # local node 00:04:04.383 13:59:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.383 13:59:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.383 13:59:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.383 13:59:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.383 13:59:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.383 13:59:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.383 13:59:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.383 13:59:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 
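With the reserved and surplus counts read back, hugepages.sh asserts that the pool the test configured matches what the kernel reports, first globally and then per NUMA node: the get_meminfo HugePages_Surp 0 call that continues below reruns the same key scan against node 0's own meminfo file. The invariant being checked, sketched for the single-node layout of this VM (variable names follow the traced script):

    nr_hugepages=1024 resv=0 surp=0
    total=$(get_meminfo HugePages_Total)        # 1024 in this run
    (( total == nr_hugepages + surp + resv ))   # the @107/@110 checks traced above
    surp0=$(get_meminfo HugePages_Surp 0)       # node-0 surplus, 0 here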
00:04:04.383 13:59:07 -- setup/common.sh@18 -- # local node=0 00:04:04.383 13:59:07 -- setup/common.sh@19 -- # local var val 00:04:04.383 13:59:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.383 13:59:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.383 13:59:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.383 13:59:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.383 13:59:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.383 13:59:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7924072 kB' 'MemUsed: 4313024 kB' 'SwapCached: 0 kB' 'Active: 467032 kB' 'Inactive: 1422624 kB' 'Active(anon): 127808 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1422624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1772340 kB' 'Mapped: 50688 kB' 'AnonPages: 118888 kB' 'Shmem: 10492 kB' 'KernelStack: 6388 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63324 kB' 'Slab: 161680 kB' 'SReclaimable: 63324 kB' 'SUnreclaim: 98356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 
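Per-node meminfo lines arrive prefixed, e.g. 'Node 0 HugePages_Surp: 0', so after mapfile the script strips the prefix with the extglob substitution traced above, which lets the very same key scan run unchanged. Standalone equivalent (extglob must be on for the +([0-9]) pattern, an assumption about the harness environment):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}"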
00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # continue 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.383 13:59:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.383 13:59:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.383 13:59:07 -- setup/common.sh@33 -- # echo 0 00:04:04.383 13:59:07 -- setup/common.sh@33 -- # return 0 00:04:04.383 13:59:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.383 13:59:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.383 13:59:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.383 13:59:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.383 node0=1024 expecting 1024 00:04:04.383 13:59:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.383 13:59:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.383 00:04:04.383 real 0m1.107s 00:04:04.383 user 0m0.470s 00:04:04.383 sys 0m0.691s 00:04:04.383 13:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.383 ************************************ 00:04:04.383 END TEST no_shrink_alloc 00:04:04.383 ************************************ 00:04:04.383 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:04.383 13:59:07 -- setup/hugepages.sh@217 -- # clear_hp 00:04:04.383 13:59:07 -- setup/hugepages.sh@37 -- # local node hp 00:04:04.383 13:59:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:04.383 13:59:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:04.383 13:59:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:04.383 13:59:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:04.383 13:59:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:04.383 13:59:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:04.383 13:59:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:04.383 00:04:04.383 real 0m4.981s 00:04:04.383 user 0m2.057s 00:04:04.383 sys 0m2.876s 00:04:04.383 13:59:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.383 ************************************ 00:04:04.383 END TEST hugepages 00:04:04.383 ************************************ 00:04:04.383 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:04.383 13:59:07 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:04.383 13:59:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.383 13:59:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.383 
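The teardown traced above (clear_hp) walks every huge-page pool under each NUMA node and zeroes it; the two bare 'echo 0' records correspond to the 2048kB and 1048576kB pool directories on this VM. A sketch of that cleanup; the redirect target is an assumption, since 'set -x' does not print redirections:

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # assumed target of the traced "echo 0"
            done
        done
        export CLEAR_HUGE=yes   # as traced at hugepages.sh@45
    }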
13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:04.383 ************************************ 00:04:04.384 START TEST driver 00:04:04.384 ************************************ 00:04:04.384 13:59:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:04.384 * Looking for test storage... 00:04:04.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.643 13:59:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:04.643 13:59:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:04.643 13:59:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:04.643 13:59:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:04.643 13:59:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:04.643 13:59:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:04.643 13:59:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:04.643 13:59:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:04.643 13:59:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:04.643 13:59:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.643 13:59:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:04.643 13:59:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:04.643 13:59:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:04.643 13:59:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:04.643 13:59:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:04.643 13:59:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:04.643 13:59:07 -- scripts/common.sh@344 -- # : 1 00:04:04.643 13:59:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:04.643 13:59:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.643 13:59:07 -- scripts/common.sh@364 -- # decimal 1 00:04:04.643 13:59:07 -- scripts/common.sh@352 -- # local d=1 00:04:04.643 13:59:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.643 13:59:07 -- scripts/common.sh@354 -- # echo 1 00:04:04.643 13:59:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:04.643 13:59:07 -- scripts/common.sh@365 -- # decimal 2 00:04:04.643 13:59:07 -- scripts/common.sh@352 -- # local d=2 00:04:04.643 13:59:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.643 13:59:07 -- scripts/common.sh@354 -- # echo 2 00:04:04.643 13:59:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:04.643 13:59:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:04.643 13:59:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:04.643 13:59:07 -- scripts/common.sh@367 -- # return 0 00:04:04.643 13:59:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.643 13:59:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:04.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.643 --rc genhtml_branch_coverage=1 00:04:04.643 --rc genhtml_function_coverage=1 00:04:04.643 --rc genhtml_legend=1 00:04:04.643 --rc geninfo_all_blocks=1 00:04:04.643 --rc geninfo_unexecuted_blocks=1 00:04:04.643 00:04:04.643 ' 00:04:04.643 13:59:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:04.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.643 --rc genhtml_branch_coverage=1 00:04:04.643 --rc genhtml_function_coverage=1 00:04:04.643 --rc genhtml_legend=1 00:04:04.643 --rc geninfo_all_blocks=1 00:04:04.643 --rc geninfo_unexecuted_blocks=1 00:04:04.643 00:04:04.643 ' 00:04:04.643 13:59:07 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:04:04.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.643 --rc genhtml_branch_coverage=1 00:04:04.643 --rc genhtml_function_coverage=1 00:04:04.643 --rc genhtml_legend=1 00:04:04.643 --rc geninfo_all_blocks=1 00:04:04.643 --rc geninfo_unexecuted_blocks=1 00:04:04.643 00:04:04.643 ' 00:04:04.644 13:59:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:04.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.644 --rc genhtml_branch_coverage=1 00:04:04.644 --rc genhtml_function_coverage=1 00:04:04.644 --rc genhtml_legend=1 00:04:04.644 --rc geninfo_all_blocks=1 00:04:04.644 --rc geninfo_unexecuted_blocks=1 00:04:04.644 00:04:04.644 ' 00:04:04.644 13:59:07 -- setup/driver.sh@68 -- # setup reset 00:04:04.644 13:59:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.644 13:59:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.231 13:59:13 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:11.231 13:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.231 13:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.231 13:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.231 ************************************ 00:04:11.231 START TEST guess_driver 00:04:11.231 ************************************ 00:04:11.231 13:59:13 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:11.231 13:59:13 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:11.231 13:59:13 -- setup/driver.sh@47 -- # local fail=0 00:04:11.231 13:59:13 -- setup/driver.sh@49 -- # pick_driver 00:04:11.231 13:59:13 -- setup/driver.sh@36 -- # vfio 00:04:11.231 13:59:13 -- setup/driver.sh@21 -- # local iommu_groups 00:04:11.231 13:59:13 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:11.231 13:59:13 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:11.231 13:59:13 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:11.231 13:59:13 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:11.231 13:59:13 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:11.231 13:59:13 -- setup/driver.sh@32 -- # return 1 00:04:11.231 13:59:13 -- setup/driver.sh@38 -- # uio 00:04:11.231 13:59:13 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:11.231 13:59:13 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:11.231 13:59:13 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:11.231 13:59:13 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:11.231 13:59:13 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:11.231 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:11.231 13:59:13 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:11.231 13:59:13 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:11.231 13:59:13 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:11.231 Looking for driver=uio_pci_generic 00:04:11.232 13:59:13 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:11.232 13:59:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.232 13:59:13 -- setup/driver.sh@45 -- # setup output config 00:04:11.232 13:59:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.232 13:59:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:11.232 
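pick_driver, traced above, prefers the vfio path when IOMMU groups exist or unsafe no-IOMMU mode is enabled, and otherwise falls back to uio_pci_generic, accepting it only if 'modprobe --show-depends' resolves to actual .ko modules; with zero IOMMU groups on this VM, the uio path wins. A condensed sketch of that decision (the 'vfio-pci' string is assumed, since only the uio branch is exercised in this log, and the grep stands in for the traced glob match against *\.\k\o*):

    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)   # nullglob assumed: the trace shows (( 0 > 0 ))
        local unsafe=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }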
13:59:14 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:11.232 13:59:14 -- setup/driver.sh@58 -- # continue 00:04:11.232 13:59:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.492 13:59:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.492 13:59:14 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.492 13:59:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.492 13:59:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.492 13:59:14 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.492 13:59:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.492 13:59:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.492 13:59:14 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.492 13:59:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.492 13:59:14 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.492 13:59:14 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.492 13:59:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.492 13:59:14 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:11.492 13:59:14 -- setup/driver.sh@65 -- # setup reset 00:04:11.492 13:59:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.492 13:59:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.075 00:04:18.075 real 0m6.967s 00:04:18.075 user 0m0.649s 00:04:18.075 sys 0m1.215s 00:04:18.075 13:59:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.075 ************************************ 00:04:18.075 END TEST guess_driver 00:04:18.075 ************************************ 00:04:18.075 13:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.075 00:04:18.075 real 0m12.969s 00:04:18.075 user 0m1.013s 00:04:18.075 sys 0m1.905s 00:04:18.075 13:59:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.075 ************************************ 00:04:18.075 END TEST driver 00:04:18.076 ************************************ 00:04:18.076 13:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.076 13:59:20 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:18.076 13:59:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.076 13:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.076 13:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:18.076 ************************************ 00:04:18.076 START TEST devices 00:04:18.076 ************************************ 00:04:18.076 13:59:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:18.076 * Looking for test storage... 
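The driver suite is done, and the devices suite below repeats the same lcov probe verbatim. What that probe traces is scripts/common.sh comparing the awk-extracted lcov version (1.15 here) against 2: 'lt 1.15 2' splits both versions on the '.-:' characters and compares them field by field, and since 1.15 < 2 the 1.x option spelling (lcov_branch_coverage) is exported in LCOV_OPTS. A sketch of the comparison, reconstructed from the trace:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2; local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            # missing fields default to 0, so "2" compares as "2.0"
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' ]]
    }
    lt 1.15 2 && echo 'lcov 1.15 is older than 2'   # true in this run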
00:04:18.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.076 13:59:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:18.076 13:59:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:18.076 13:59:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:18.076 13:59:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:18.076 13:59:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:18.076 13:59:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:18.076 13:59:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:18.076 13:59:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:18.076 13:59:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:18.076 13:59:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.076 13:59:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:18.076 13:59:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:18.076 13:59:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:18.076 13:59:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:18.076 13:59:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:18.076 13:59:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:18.076 13:59:20 -- scripts/common.sh@344 -- # : 1 00:04:18.076 13:59:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:18.076 13:59:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.076 13:59:20 -- scripts/common.sh@364 -- # decimal 1 00:04:18.076 13:59:20 -- scripts/common.sh@352 -- # local d=1 00:04:18.076 13:59:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.076 13:59:20 -- scripts/common.sh@354 -- # echo 1 00:04:18.076 13:59:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:18.076 13:59:20 -- scripts/common.sh@365 -- # decimal 2 00:04:18.076 13:59:20 -- scripts/common.sh@352 -- # local d=2 00:04:18.076 13:59:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.076 13:59:20 -- scripts/common.sh@354 -- # echo 2 00:04:18.076 13:59:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:18.076 13:59:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:18.076 13:59:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:18.076 13:59:20 -- scripts/common.sh@367 -- # return 0 00:04:18.076 13:59:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.076 13:59:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.076 --rc genhtml_branch_coverage=1 00:04:18.076 --rc genhtml_function_coverage=1 00:04:18.076 --rc genhtml_legend=1 00:04:18.076 --rc geninfo_all_blocks=1 00:04:18.076 --rc geninfo_unexecuted_blocks=1 00:04:18.076 00:04:18.076 ' 00:04:18.076 13:59:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.076 --rc genhtml_branch_coverage=1 00:04:18.076 --rc genhtml_function_coverage=1 00:04:18.076 --rc genhtml_legend=1 00:04:18.076 --rc geninfo_all_blocks=1 00:04:18.076 --rc geninfo_unexecuted_blocks=1 00:04:18.076 00:04:18.076 ' 00:04:18.076 13:59:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.076 --rc genhtml_branch_coverage=1 00:04:18.076 --rc genhtml_function_coverage=1 00:04:18.076 --rc genhtml_legend=1 00:04:18.076 --rc geninfo_all_blocks=1 00:04:18.076 --rc geninfo_unexecuted_blocks=1 00:04:18.076 00:04:18.076 ' 00:04:18.076 13:59:20 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.076 --rc genhtml_branch_coverage=1 00:04:18.076 --rc genhtml_function_coverage=1 00:04:18.076 --rc genhtml_legend=1 00:04:18.076 --rc geninfo_all_blocks=1 00:04:18.076 --rc geninfo_unexecuted_blocks=1 00:04:18.076 00:04:18.076 ' 00:04:18.076 13:59:20 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.076 13:59:20 -- setup/devices.sh@192 -- # setup reset 00:04:18.076 13:59:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.076 13:59:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.649 13:59:21 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:18.649 13:59:21 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:18.649 13:59:21 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:18.649 13:59:21 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:18.649 13:59:21 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:18.649 13:59:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:18.649 13:59:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:18.649 13:59:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:18.649 13:59:21 -- setup/devices.sh@196 -- # blocks=() 00:04:18.649 13:59:21 -- setup/devices.sh@196 -- # declare -a blocks 00:04:18.649 13:59:21 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:18.649 13:59:21 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:18.649 13:59:21 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:18.649 13:59:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.649 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:18.649 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:18.649 13:59:21 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:18.649 13:59:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:18.649 13:59:21 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:18.649 13:59:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:18.649 No valid GPT data, bailing 00:04:18.649 13:59:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.649 13:59:21 -- scripts/common.sh@393 -- # pt= 00:04:18.649 13:59:21 -- scripts/common.sh@394 -- # return 1 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:18.649 13:59:21 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:18.649 13:59:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:18.649 13:59:21 -- setup/common.sh@80 -- # echo 1073741824 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:18.649 13:59:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.649 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:18.649 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:18.649 13:59:21 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:18.649 13:59:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:18.649 13:59:21 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:18.649 13:59:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:18.649 No valid GPT data, bailing 00:04:18.649 13:59:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.649 13:59:21 -- scripts/common.sh@393 -- # pt= 00:04:18.649 13:59:21 -- scripts/common.sh@394 -- # return 1 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:18.649 13:59:21 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:18.649 13:59:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:18.649 13:59:21 -- setup/common.sh@80 -- # echo 4294967296 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.649 13:59:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.649 13:59:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:18.649 13:59:21 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:18.649 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:18.649 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:18.649 13:59:21 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:18.649 13:59:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:18.649 13:59:21 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:18.649 13:59:21 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:18.649 13:59:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:18.649 No valid GPT data, bailing 00:04:18.911 13:59:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:18.911 13:59:21 -- scripts/common.sh@393 -- # pt= 00:04:18.911 13:59:21 -- scripts/common.sh@394 -- # return 1 00:04:18.911 13:59:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:18.911 13:59:21 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:18.911 13:59:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:18.911 13:59:21 -- setup/common.sh@80 -- # echo 4294967296 00:04:18.911 13:59:21 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.911 13:59:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.911 13:59:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:18.911 13:59:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.911 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:18.911 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:18.911 13:59:21 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:18.911 13:59:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:18.911 13:59:21 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:18.911 13:59:21 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:18.911 13:59:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:18.911 No valid GPT data, bailing 00:04:18.911 13:59:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:18.911 13:59:21 -- scripts/common.sh@393 -- # pt= 00:04:18.911 13:59:21 -- scripts/common.sh@394 -- # return 1 00:04:18.911 13:59:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:18.912 13:59:21 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:18.912 13:59:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:18.912 13:59:21 -- setup/common.sh@80 -- # echo 4294967296 00:04:18.912 13:59:21 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.912 13:59:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.912 13:59:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:18.912 13:59:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.912 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:18.912 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:18.912 13:59:21 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:18.912 13:59:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:18.912 13:59:21 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:18.912 13:59:21 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:18.912 13:59:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:18.912 No valid GPT data, bailing 00:04:18.912 13:59:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:18.912 
13:59:21 -- scripts/common.sh@393 -- # pt= 00:04:18.912 13:59:21 -- scripts/common.sh@394 -- # return 1 00:04:18.912 13:59:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:18.912 13:59:21 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:18.912 13:59:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:18.912 13:59:21 -- setup/common.sh@80 -- # echo 6343335936 00:04:18.912 13:59:21 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:18.912 13:59:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.912 13:59:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:18.912 13:59:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.912 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:18.912 13:59:21 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:18.912 13:59:21 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:18.912 13:59:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:18.912 13:59:21 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:18.912 13:59:21 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:18.912 13:59:21 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:18.912 No valid GPT data, bailing 00:04:18.912 13:59:21 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:19.173 13:59:21 -- scripts/common.sh@393 -- # pt= 00:04:19.173 13:59:21 -- scripts/common.sh@394 -- # return 1 00:04:19.173 13:59:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:19.173 13:59:21 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:19.173 13:59:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:19.173 13:59:21 -- setup/common.sh@80 -- # echo 5368709120 00:04:19.173 13:59:21 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:19.173 13:59:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.173 13:59:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:19.173 13:59:21 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:19.173 13:59:21 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:19.173 13:59:21 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.173 13:59:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.174 13:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.174 13:59:21 -- common/autotest_common.sh@10 -- # set +x 00:04:19.174 ************************************ 00:04:19.174 START TEST nvme_mount 00:04:19.174 ************************************ 00:04:19.174 13:59:21 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:19.174 13:59:21 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:19.174 13:59:21 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:19.174 13:59:21 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.174 13:59:21 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.174 13:59:21 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:19.174 13:59:21 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:19.174 13:59:21 -- setup/common.sh@40 -- # local part_no=1 00:04:19.174 13:59:21 -- setup/common.sh@41 -- # local size=1073741824 00:04:19.174 13:59:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.174 13:59:21 -- setup/common.sh@44 -- # parts=() 00:04:19.174 13:59:21 -- 
setup/common.sh@44 -- # local parts 00:04:19.174 13:59:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.174 13:59:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.174 13:59:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.174 13:59:21 -- setup/common.sh@46 -- # (( part++ )) 00:04:19.174 13:59:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.174 13:59:21 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:19.174 13:59:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:19.174 13:59:21 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:20.115 Creating new GPT entries in memory. 00:04:20.115 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.115 other utilities. 00:04:20.115 13:59:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.115 13:59:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.115 13:59:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.115 13:59:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.115 13:59:22 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:21.057 Creating new GPT entries in memory. 00:04:21.057 The operation has completed successfully. 00:04:21.057 13:59:23 -- setup/common.sh@57 -- # (( part++ )) 00:04:21.057 13:59:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.057 13:59:23 -- setup/common.sh@62 -- # wait 53685 00:04:21.057 13:59:23 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.057 13:59:23 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:21.057 13:59:23 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.057 13:59:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:21.057 13:59:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:21.317 13:59:23 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.317 13:59:24 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.317 13:59:24 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:21.317 13:59:24 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:21.317 13:59:24 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.317 13:59:24 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.317 13:59:24 -- setup/devices.sh@53 -- # local found=0 00:04:21.317 13:59:24 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.317 13:59:24 -- setup/devices.sh@56 -- # : 00:04:21.317 13:59:24 -- setup/devices.sh@59 -- # local pci status 00:04:21.317 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.317 13:59:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:21.317 13:59:24 -- setup/devices.sh@47 -- # setup output config 00:04:21.317 13:59:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.317 13:59:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.317 13:59:24 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:21.317 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.578 13:59:24 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:21.578 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.578 13:59:24 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:21.578 13:59:24 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:21.578 13:59:24 -- setup/devices.sh@63 -- # found=1 00:04:21.578 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.578 13:59:24 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:21.578 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.839 13:59:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:21.839 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.839 13:59:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:21.839 13:59:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.839 13:59:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.839 13:59:24 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:21.839 13:59:24 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.839 13:59:24 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.839 13:59:24 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.099 13:59:24 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:22.099 13:59:24 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.099 13:59:24 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.099 13:59:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:22.099 13:59:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:22.099 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.099 13:59:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:22.099 13:59:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:22.358 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.358 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.359 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.359 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:22.359 13:59:25 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:22.359 13:59:25 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:22.359 13:59:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.359 13:59:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:22.359 13:59:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:22.359 13:59:25 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.359 13:59:25 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.359 13:59:25 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:22.359 13:59:25 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:22.359 13:59:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.359 13:59:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.359 13:59:25 -- setup/devices.sh@53 -- # local found=0 00:04:22.359 13:59:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.359 13:59:25 -- setup/devices.sh@56 -- # : 00:04:22.359 13:59:25 -- setup/devices.sh@59 -- # local pci status 00:04:22.359 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.359 13:59:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:22.359 13:59:25 -- setup/devices.sh@47 -- # setup output config 00:04:22.359 13:59:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.359 13:59:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.618 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:22.618 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.618 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:22.618 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:22.877 13:59:25 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:22.877 13:59:25 -- setup/devices.sh@63 -- # found=1 00:04:22.877 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:22.877 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:22.877 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.142 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.142 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.142 13:59:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.142 13:59:25 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:23.142 13:59:25 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.142 13:59:25 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.142 13:59:25 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.142 13:59:25 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.142 13:59:25 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:23.142 13:59:25 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:23.142 13:59:25 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:23.142 13:59:25 -- setup/devices.sh@50 -- # local mount_point= 00:04:23.142 13:59:25 -- setup/devices.sh@51 -- # local test_file= 00:04:23.142 13:59:25 -- setup/devices.sh@53 -- # local found=0 
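Note on the sequence traced above: setup/common.sh's mkfs helper is exercised twice in this test, once on the partition /dev/nvme1n1p1 and once on the whole disk /dev/nvme1n1 capped at 1024M. A minimal standalone sketch of that helper (function name illustrative; the real one lives in test/setup/common.sh):

  make_fs_and_mount() {
      local dev=$1 mnt=$2 size=$3      # size optional, e.g. "1024M"
      mkdir -p "$mnt"                  # ensure the mountpoint exists
      [[ -e $dev ]] || return 1        # bail out if the device is missing
      mkfs.ext4 -qF "$dev" $size       # quiet, forced format; size caps the fs
      mount "$dev" "$mnt"
  }

Usage mirroring the trace: make_fs_and_mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount.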
00:04:23.142 13:59:25 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.142 13:59:25 -- setup/devices.sh@59 -- # local pci status 00:04:23.142 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.142 13:59:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:23.142 13:59:25 -- setup/devices.sh@47 -- # setup output config 00:04:23.142 13:59:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.142 13:59:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.142 13:59:25 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.142 13:59:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.424 13:59:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.424 13:59:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.699 13:59:26 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.699 13:59:26 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:23.699 13:59:26 -- setup/devices.sh@63 -- # found=1 00:04:23.699 13:59:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.699 13:59:26 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.699 13:59:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.699 13:59:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.699 13:59:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.699 13:59:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:23.699 13:59:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.960 13:59:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.960 13:59:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.960 13:59:26 -- setup/devices.sh@68 -- # return 0 00:04:23.960 13:59:26 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:23.960 13:59:26 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.960 13:59:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:23.960 13:59:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:23.960 13:59:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:23.960 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.960 00:04:23.960 real 0m4.853s 00:04:23.960 user 0m0.903s 00:04:23.960 sys 0m1.226s 00:04:23.961 ************************************ 00:04:23.961 END TEST nvme_mount 00:04:23.961 ************************************ 00:04:23.961 13:59:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:23.961 13:59:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.961 13:59:26 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:23.961 13:59:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.961 13:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.961 13:59:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.961 ************************************ 00:04:23.961 START TEST dm_mount 00:04:23.961 ************************************ 00:04:23.961 13:59:26 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:23.961 13:59:26 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:23.961 13:59:26 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:23.961 13:59:26 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:23.961 13:59:26 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:23.961 13:59:26 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:23.961 13:59:26 -- setup/common.sh@40 -- # local part_no=2 00:04:23.961 13:59:26 -- setup/common.sh@41 -- # local size=1073741824 00:04:23.961 13:59:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.961 13:59:26 -- setup/common.sh@44 -- # parts=() 00:04:23.961 13:59:26 -- setup/common.sh@44 -- # local parts 00:04:23.961 13:59:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.961 13:59:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.961 13:59:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.961 13:59:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:23.961 13:59:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.961 13:59:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.961 13:59:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:23.961 13:59:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.961 13:59:26 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:23.961 13:59:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:23.961 13:59:26 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:24.904 Creating new GPT entries in memory. 00:04:24.904 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.904 other utilities. 00:04:24.904 13:59:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.904 13:59:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.904 13:59:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.904 13:59:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.904 13:59:27 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:26.289 Creating new GPT entries in memory. 00:04:26.289 The operation has completed successfully. 00:04:26.289 13:59:28 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.289 13:59:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.289 13:59:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.289 13:59:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.289 13:59:28 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:27.232 The operation has completed successfully. 
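For reference, the partitioning just traced boils down to three sgdisk calls plus a wait for udev; a simplified sketch (SPDK's sync_dev_uevents.sh wrapper is approximated here with udevadm settle):

  disk=/dev/nvme1n1
  sgdisk "$disk" --zap-all                            # destroy old GPT/MBR state
  flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: 128 MiB of 512 B sectors
  flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: the next 128 MiB
  udevadm settle                                      # let partition uevents land

flock serializes the partition-table writes against any concurrent user of the disk, which is why each --new call in the trace is wrapped in it.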
00:04:27.232 13:59:29 -- setup/common.sh@57 -- # (( part++ )) 00:04:27.232 13:59:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.232 13:59:29 -- setup/common.sh@62 -- # wait 54309 00:04:27.232 13:59:29 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.232 13:59:29 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.232 13:59:29 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.232 13:59:29 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.232 13:59:29 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.232 13:59:29 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.232 13:59:29 -- setup/devices.sh@161 -- # break 00:04:27.232 13:59:29 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.232 13:59:29 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.232 13:59:29 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.232 13:59:29 -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.232 13:59:29 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:27.232 13:59:29 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:27.232 13:59:29 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.232 13:59:29 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:27.232 13:59:29 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.232 13:59:29 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.232 13:59:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.232 13:59:29 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.232 13:59:30 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.232 13:59:30 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:27.232 13:59:30 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:27.232 13:59:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.232 13:59:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.232 13:59:30 -- setup/devices.sh@53 -- # local found=0 00:04:27.232 13:59:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.232 13:59:30 -- setup/devices.sh@56 -- # : 00:04:27.232 13:59:30 -- setup/devices.sh@59 -- # local pci status 00:04:27.232 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.232 13:59:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:27.232 13:59:30 -- setup/devices.sh@47 -- # setup output config 00:04:27.232 13:59:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.232 13:59:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.232 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.232 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.493 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.493 13:59:30 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.753 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.753 13:59:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:27.753 13:59:30 -- setup/devices.sh@63 -- # found=1 00:04:27.753 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.753 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.753 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.753 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:27.753 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.035 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.035 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.035 13:59:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.035 13:59:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:28.035 13:59:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.035 13:59:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.035 13:59:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:28.035 13:59:30 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.035 13:59:30 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:28.035 13:59:30 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:28.035 13:59:30 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:28.035 13:59:30 -- setup/devices.sh@50 -- # local mount_point= 00:04:28.035 13:59:30 -- setup/devices.sh@51 -- # local test_file= 00:04:28.035 13:59:30 -- setup/devices.sh@53 -- # local found=0 00:04:28.035 13:59:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.035 13:59:30 -- setup/devices.sh@59 -- # local pci status 00:04:28.035 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.035 13:59:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:28.035 13:59:30 -- setup/devices.sh@47 -- # setup output config 00:04:28.035 13:59:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.035 13:59:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.035 13:59:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.035 13:59:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.295 13:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.295 13:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.554 13:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.555 13:59:31 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:28.555 13:59:31 -- setup/devices.sh@63 -- # found=1 00:04:28.555 13:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.555 13:59:31 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.555 13:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.555 13:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.555 13:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.555 13:59:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:28.555 13:59:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.814 13:59:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.814 13:59:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.814 13:59:31 -- setup/devices.sh@68 -- # return 0 00:04:28.814 13:59:31 -- setup/devices.sh@187 -- # cleanup_dm 00:04:28.814 13:59:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.814 13:59:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.814 13:59:31 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:28.815 13:59:31 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:28.815 13:59:31 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:28.815 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.815 13:59:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:28.815 13:59:31 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:28.815 00:04:28.815 real 0m4.827s 00:04:28.815 user 0m0.594s 00:04:28.815 sys 0m0.890s 00:04:28.815 13:59:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.815 ************************************ 00:04:28.815 END TEST dm_mount 00:04:28.815 ************************************ 00:04:28.815 13:59:31 -- common/autotest_common.sh@10 -- # set +x 00:04:28.815 13:59:31 -- setup/devices.sh@1 -- # cleanup 00:04:28.815 13:59:31 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:28.815 13:59:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.815 13:59:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:28.815 13:59:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:28.815 13:59:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:28.815 13:59:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:29.075 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.075 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.075 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.075 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:29.075 13:59:31 -- setup/devices.sh@12 -- # cleanup_dm 00:04:29.075 13:59:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.075 13:59:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.075 13:59:31 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:29.075 13:59:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:29.075 13:59:31 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:29.075 13:59:31 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:29.075 ************************************ 00:04:29.075 END TEST devices 00:04:29.075 ************************************ 00:04:29.075 00:04:29.075 real 0m11.714s 00:04:29.075 user 0m2.289s 00:04:29.075 sys 0m2.761s 00:04:29.075 13:59:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.075 13:59:31 -- common/autotest_common.sh@10 -- # 
set +x 00:04:29.336 ************************************ 00:04:29.336 END TEST setup.sh 00:04:29.336 ************************************ 00:04:29.336 00:04:29.336 real 0m40.801s 00:04:29.336 user 0m7.702s 00:04:29.336 sys 0m10.741s 00:04:29.336 13:59:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.336 13:59:32 -- common/autotest_common.sh@10 -- # set +x 00:04:29.336 13:59:32 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:29.336 Hugepages 00:04:29.336 node hugesize free / total 00:04:29.336 node0 1048576kB 0 / 0 00:04:29.336 node0 2048kB 2048 / 2048 00:04:29.336 00:04:29.336 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.597 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:29.597 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:29.597 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:29.597 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:29.597 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:29.597 13:59:32 -- spdk/autotest.sh@128 -- # uname -s 00:04:29.597 13:59:32 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:29.597 13:59:32 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:29.597 13:59:32 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.798 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.798 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.798 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.798 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.798 13:59:33 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:31.738 13:59:34 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:31.738 13:59:34 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:31.738 13:59:34 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:31.738 13:59:34 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:31.738 13:59:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:31.738 13:59:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:31.738 13:59:34 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.738 13:59:34 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:31.738 13:59:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:31.999 13:59:34 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:31.999 13:59:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:31.999 13:59:34 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.261 Waiting for block devices as requested 00:04:32.261 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.521 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.521 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.521 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.814 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:37.814 13:59:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:37.814 13:59:40 -- 
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:37.814 13:59:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1552 -- # continue 00:04:37.814 13:59:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:37.814 13:59:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1552 -- # continue 00:04:37.814 13:59:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:37.814 13:59:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:37.814 13:59:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:37.814 13:59:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1552 -- # continue 00:04:37.814 13:59:40 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:37.814 13:59:40 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:37.814 13:59:40 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:37.814 13:59:40 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:37.814 13:59:40 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:37.814 13:59:40 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:37.814 13:59:40 -- common/autotest_common.sh@1552 -- # continue 00:04:37.814 13:59:40 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:37.814 13:59:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.814 13:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.814 13:59:40 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:37.814 13:59:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.814 13:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:37.814 13:59:40 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.771 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.771 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.771 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.059 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.059 13:59:41 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:39.059 13:59:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.059 13:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.059 13:59:41 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:39.059 13:59:41 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:39.059 13:59:41 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:39.059 13:59:41 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:39.059 13:59:41 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:39.059 13:59:41 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:39.059 13:59:41 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:39.059 13:59:41 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:39.059 13:59:41 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.059 13:59:41 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.059 13:59:41 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:39.059 13:59:41 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:39.059 13:59:41 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:39.059 13:59:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:39.059 13:59:41 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.059 13:59:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:39.059 13:59:41 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
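The per-controller probe traced above reduces to two nvme-cli reads: OACS (bit 3 set means namespace management is supported, hence oacs_ns_manage=8 for the 0x12a values seen here) and unallocated NVM capacity. A hedged sketch of the same check (nvme-cli assumed installed; device path illustrative):

  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # e.g. ' 0x12a'
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2) # e.g. ' 0'
  if (( oacs & 0x8 )) && (( unvmcap == 0 )); then
      echo "$ctrlr: namespace management supported, no unallocated capacity"
  fi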
00:04:39.059 13:59:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:39.059 13:59:41 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.059 13:59:41 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:39.059 13:59:41 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:39.059 13:59:41 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.059 13:59:41 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:39.059 13:59:41 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:39.059 13:59:41 -- common/autotest_common.sh@1588 -- # return 0 00:04:39.059 13:59:41 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:39.059 13:59:41 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:39.059 13:59:41 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:39.059 13:59:41 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:39.059 13:59:41 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:39.059 13:59:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.059 13:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.059 13:59:41 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:39.059 13:59:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.059 13:59:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.059 13:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.059 ************************************ 00:04:39.059 START TEST env 00:04:39.059 ************************************ 00:04:39.059 13:59:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:39.059 * Looking for test storage... 00:04:39.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:39.059 13:59:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:39.059 13:59:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.059 13:59:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:39.320 13:59:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.320 13:59:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.320 13:59:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.320 13:59:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.320 13:59:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.320 13:59:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:39.320 13:59:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.320 13:59:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:39.320 13:59:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:39.320 13:59:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:39.320 13:59:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:39.320 13:59:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:39.320 13:59:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:39.321 13:59:42 -- scripts/common.sh@344 -- # : 1 00:04:39.321 13:59:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:39.321 13:59:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.321 13:59:42 -- scripts/common.sh@364 -- # decimal 1 00:04:39.321 13:59:42 -- scripts/common.sh@352 -- # local d=1 00:04:39.321 13:59:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.321 13:59:42 -- scripts/common.sh@354 -- # echo 1 00:04:39.321 13:59:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:39.321 13:59:42 -- scripts/common.sh@365 -- # decimal 2 00:04:39.321 13:59:42 -- scripts/common.sh@352 -- # local d=2 00:04:39.321 13:59:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.321 13:59:42 -- scripts/common.sh@354 -- # echo 2 00:04:39.321 13:59:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:39.321 13:59:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:39.321 13:59:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:39.321 13:59:42 -- scripts/common.sh@367 -- # return 0 00:04:39.321 13:59:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.321 13:59:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.321 --rc genhtml_branch_coverage=1 00:04:39.321 --rc genhtml_function_coverage=1 00:04:39.321 --rc genhtml_legend=1 00:04:39.321 --rc geninfo_all_blocks=1 00:04:39.321 --rc geninfo_unexecuted_blocks=1 00:04:39.321 00:04:39.321 ' 00:04:39.321 13:59:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.321 --rc genhtml_branch_coverage=1 00:04:39.321 --rc genhtml_function_coverage=1 00:04:39.321 --rc genhtml_legend=1 00:04:39.321 --rc geninfo_all_blocks=1 00:04:39.321 --rc geninfo_unexecuted_blocks=1 00:04:39.321 00:04:39.321 ' 00:04:39.321 13:59:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.321 --rc genhtml_branch_coverage=1 00:04:39.321 --rc genhtml_function_coverage=1 00:04:39.321 --rc genhtml_legend=1 00:04:39.321 --rc geninfo_all_blocks=1 00:04:39.321 --rc geninfo_unexecuted_blocks=1 00:04:39.321 00:04:39.321 ' 00:04:39.321 13:59:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:39.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.321 --rc genhtml_branch_coverage=1 00:04:39.321 --rc genhtml_function_coverage=1 00:04:39.321 --rc genhtml_legend=1 00:04:39.321 --rc geninfo_all_blocks=1 00:04:39.321 --rc geninfo_unexecuted_blocks=1 00:04:39.321 00:04:39.321 ' 00:04:39.321 13:59:42 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:39.321 13:59:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.321 13:59:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.321 13:59:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.321 ************************************ 00:04:39.321 START TEST env_memory 00:04:39.321 ************************************ 00:04:39.321 13:59:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:39.321 00:04:39.321 00:04:39.321 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.321 http://cunit.sourceforge.net/ 00:04:39.321 00:04:39.321 00:04:39.321 Suite: memory 00:04:39.321 Test: alloc and free memory map ...[2024-12-08 13:59:42.101941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:39.321 passed 00:04:39.321 Test: mem 
map translation ...[2024-12-08 13:59:42.141204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:39.321 [2024-12-08 13:59:42.141386] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:39.321 [2024-12-08 13:59:42.141496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:39.321 [2024-12-08 13:59:42.141628] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:39.321 passed 00:04:39.321 Test: mem map registration ...[2024-12-08 13:59:42.210209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:39.321 [2024-12-08 13:59:42.210378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:39.321 passed 00:04:39.582 Test: mem map adjacent registrations ...passed 00:04:39.582 00:04:39.582 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.582 suites 1 1 n/a 0 0 00:04:39.582 tests 4 4 4 0 0 00:04:39.582 asserts 152 152 152 0 n/a 00:04:39.582 00:04:39.582 Elapsed time = 0.233 seconds 00:04:39.582 00:04:39.582 ************************************ 00:04:39.582 END TEST env_memory 00:04:39.582 ************************************ 00:04:39.582 real 0m0.271s 00:04:39.582 user 0m0.238s 00:04:39.582 sys 0m0.022s 00:04:39.582 13:59:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.582 13:59:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.582 13:59:42 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.582 13:59:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.582 13:59:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.582 13:59:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.582 ************************************ 00:04:39.582 START TEST env_vtophys 00:04:39.582 ************************************ 00:04:39.582 13:59:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.582 EAL: lib.eal log level changed from notice to debug 00:04:39.582 EAL: Detected lcore 0 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 1 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 2 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 3 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 4 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 5 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 6 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 7 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 8 as core 0 on socket 0 00:04:39.582 EAL: Detected lcore 9 as core 0 on socket 0 00:04:39.582 EAL: Maximum logical cores by configuration: 128 00:04:39.582 EAL: Detected CPU lcores: 10 00:04:39.582 EAL: Detected NUMA nodes: 1 00:04:39.582 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:39.582 EAL: Detected shared linkage of DPDK 00:04:39.582 EAL: No shared files mode enabled, IPC will be disabled 00:04:39.582 EAL: Selected IOVA mode 'PA' 00:04:39.582 EAL: Probing VFIO support... 
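The spdk_mem_map_set_translation *ERROR* lines above are expected output: memory_ut deliberately passes bad parameters (a len of 1234 bytes that is not 2 MB-aligned, a vaddr of 1234 that is equally unaligned, and 281474976710656, i.e. 2^48, just past the usermode address range) and asserts that each one is rejected. A minimal standalone sketch of those checks, assuming the mem-map API from spdk/env.h at this SPDK revision; the app name and the 0xabcd translation value are made up for illustration:

#include "spdk/env.h"
#include <assert.h>

#define ADDR_2MB (2ULL * 1024 * 1024)

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_mem_map *map;

	spdk_env_opts_init(&opts);
	opts.name = "memmap_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* No notify ops: this map only stores translations. */
	map = spdk_mem_map_alloc(0, NULL, NULL);
	assert(map != NULL);

	/* A 2 MB-aligned vaddr/len pair is accepted... */
	assert(spdk_mem_map_set_translation(map, ADDR_2MB, ADDR_2MB, 0xabcd) == 0);

	/* ...while the unaligned cases logged above are rejected. */
	assert(spdk_mem_map_set_translation(map, ADDR_2MB, 1234, 0) != 0);
	assert(spdk_mem_map_set_translation(map, 1234, ADDR_2MB, 0) != 0);

	spdk_mem_map_free(&map);
	return 0;
}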
00:04:39.582 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.582 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:39.582 EAL: Ask a virtual area of 0x2e000 bytes 00:04:39.582 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:39.582 EAL: Setting up physically contiguous memory... 00:04:39.582 EAL: Setting maximum number of open files to 524288 00:04:39.582 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:39.582 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:39.582 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.582 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:39.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.582 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.582 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:39.582 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:39.582 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.582 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:39.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.582 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.582 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:39.582 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:39.582 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.582 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:39.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.582 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.582 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:39.582 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:39.582 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.582 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:39.582 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.582 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.582 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:39.582 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:39.582 EAL: Hugepages will be freed exactly as allocated. 00:04:39.582 EAL: No shared files mode enabled, IPC is disabled 00:04:39.582 EAL: No shared files mode enabled, IPC is disabled 00:04:39.843 EAL: TSC frequency is ~2600000 KHz 00:04:39.843 EAL: Main lcore 0 is ready (tid=7f4db6617a40;cpuset=[0]) 00:04:39.843 EAL: Trying to obtain current memory policy. 00:04:39.843 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.843 EAL: Restoring previous memory policy: 0 00:04:39.843 EAL: request: mp_malloc_sync 00:04:39.843 EAL: No shared files mode enabled, IPC is disabled 00:04:39.843 EAL: Heap on socket 0 was expanded by 2MB 00:04:39.843 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.843 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:39.843 EAL: Mem event callback 'spdk:(nil)' registered 00:04:39.843 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:39.843 00:04:39.843 00:04:39.843 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.843 http://cunit.sourceforge.net/ 00:04:39.843 00:04:39.843 00:04:39.843 Suite: components_suite 00:04:40.104 Test: vtophys_malloc_test ...passed 00:04:40.104 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
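vtophys_malloc_test, which just passed, allocates DMA-safe buffers from the hugepage-backed heap and checks that every one of them resolves to a physical address (the EAL banner above shows IOVA mode 'PA', so translations map straight to physical memory). A hedged sketch of the core assertion, assuming the two-argument spdk_vtophys() from spdk/env.h; buffer size and alignment are arbitrary:

#include "spdk/env.h"
#include <assert.h>

int main(void)
{
	struct spdk_env_opts opts;
	uint64_t size = 4096, paddr;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Pinned allocation from the DPDK heap; only such memory has a
	 * translation the env layer can look up. */
	buf = spdk_dma_malloc(4096, 0x1000, NULL);
	assert(buf != NULL);

	/* SPDK_VTOPHYS_ERROR marks an address with no known mapping. */
	paddr = spdk_vtophys(buf, &size);
	assert(paddr != SPDK_VTOPHYS_ERROR);

	spdk_dma_free(buf);
	return 0;
}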
00:04:40.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.104 EAL: Restoring previous memory policy: 4 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was expanded by 4MB 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was shrunk by 4MB 00:04:40.104 EAL: Trying to obtain current memory policy. 00:04:40.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.104 EAL: Restoring previous memory policy: 4 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was expanded by 6MB 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was shrunk by 6MB 00:04:40.104 EAL: Trying to obtain current memory policy. 00:04:40.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.104 EAL: Restoring previous memory policy: 4 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was expanded by 10MB 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was shrunk by 10MB 00:04:40.104 EAL: Trying to obtain current memory policy. 00:04:40.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.104 EAL: Restoring previous memory policy: 4 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was expanded by 18MB 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was shrunk by 18MB 00:04:40.104 EAL: Trying to obtain current memory policy. 00:04:40.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.104 EAL: Restoring previous memory policy: 4 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was expanded by 34MB 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was shrunk by 34MB 00:04:40.104 EAL: Trying to obtain current memory policy. 
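Every 'Heap on socket 0 was expanded/shrunk by N MB' pair above is DPDK's dynamic memory mode at work: the EAL maps hugepages only when an allocation needs them and returns them on free ('Hugepages will be freed exactly as allocated'), and the mem event callback registered earlier as 'spdk:(nil)' is how SPDK hears about each change so it can keep its translation maps current. A sketch of that hook at the DPDK level, assuming the rte_memory.h callback API; the callback name "sketch_cb" is made up:

#include <rte_memory.h>
#include <stdio.h>

/* Invoked by the EAL on every heap grow/shrink: the notifications
 * behind the expand/shrink messages in the trace above. */
static void
mem_event_cb(enum rte_mem_event event_type, const void *addr,
	     size_t len, void *arg)
{
	printf("%s addr=%p len=%zu\n",
	       event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free",
	       addr, len);
}

static void
install_hook(void)
{
	/* SPDK's env layer registers its own callback much like this
	 * during initialization. */
	rte_mem_event_callback_register("sketch_cb", mem_event_cb, NULL);
}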
00:04:40.104 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.104 EAL: Restoring previous memory policy: 4 00:04:40.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.104 EAL: request: mp_malloc_sync 00:04:40.104 EAL: No shared files mode enabled, IPC is disabled 00:04:40.104 EAL: Heap on socket 0 was expanded by 66MB 00:04:40.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.365 EAL: request: mp_malloc_sync 00:04:40.365 EAL: No shared files mode enabled, IPC is disabled 00:04:40.365 EAL: Heap on socket 0 was shrunk by 66MB 00:04:40.365 EAL: Trying to obtain current memory policy. 00:04:40.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.365 EAL: Restoring previous memory policy: 4 00:04:40.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.365 EAL: request: mp_malloc_sync 00:04:40.365 EAL: No shared files mode enabled, IPC is disabled 00:04:40.365 EAL: Heap on socket 0 was expanded by 130MB 00:04:40.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.624 EAL: request: mp_malloc_sync 00:04:40.624 EAL: No shared files mode enabled, IPC is disabled 00:04:40.624 EAL: Heap on socket 0 was shrunk by 130MB 00:04:40.624 EAL: Trying to obtain current memory policy. 00:04:40.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.624 EAL: Restoring previous memory policy: 4 00:04:40.624 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.624 EAL: request: mp_malloc_sync 00:04:40.624 EAL: No shared files mode enabled, IPC is disabled 00:04:40.624 EAL: Heap on socket 0 was expanded by 258MB 00:04:40.884 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.884 EAL: request: mp_malloc_sync 00:04:40.884 EAL: No shared files mode enabled, IPC is disabled 00:04:40.884 EAL: Heap on socket 0 was shrunk by 258MB 00:04:41.146 EAL: Trying to obtain current memory policy. 00:04:41.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.407 EAL: Restoring previous memory policy: 4 00:04:41.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.407 EAL: request: mp_malloc_sync 00:04:41.407 EAL: No shared files mode enabled, IPC is disabled 00:04:41.407 EAL: Heap on socket 0 was expanded by 514MB 00:04:41.979 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.979 EAL: request: mp_malloc_sync 00:04:41.979 EAL: No shared files mode enabled, IPC is disabled 00:04:41.979 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.549 EAL: Trying to obtain current memory policy. 
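The rounds themselves follow a roughly doubling pattern: the heap grows by 4, 6, 10, 18, 34, 66, 130, 258 and then 514 MB across the rounds above (a 1026 MB round follows below), and each expansion is matched by an equal shrink once the buffer is freed. A hypothetical loop reproducing the same grow/shrink traffic with the public allocator:

#include "spdk/env.h"

#define MB (1024ULL * 1024)

static void
grow_and_shrink(void)
{
	/* Each round allocates a buffer twice the size of the previous
	 * one; the EAL expands the heap on the allocation and shrinks it
	 * again on the free, firing the mem event callback both ways. */
	for (uint64_t sz = 2 * MB; sz <= 1024 * MB; sz *= 2) {
		void *buf = spdk_dma_zmalloc(sz, 2 * MB, NULL);
		if (buf == NULL) {
			break;   /* large rounds can exceed the free hugepages */
		}
		spdk_dma_free(buf);
	}
}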
00:04:42.550 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.807 EAL: Restoring previous memory policy: 4 00:04:42.807 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.807 EAL: request: mp_malloc_sync 00:04:42.807 EAL: No shared files mode enabled, IPC is disabled 00:04:42.807 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.742 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.742 EAL: request: mp_malloc_sync 00:04:43.742 EAL: No shared files mode enabled, IPC is disabled 00:04:43.742 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.678 passed 00:04:44.678 00:04:44.678 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.678 suites 1 1 n/a 0 0 00:04:44.678 tests 2 2 2 0 0 00:04:44.678 asserts 5390 5390 5390 0 n/a 00:04:44.678 00:04:44.678 Elapsed time = 4.641 seconds 00:04:44.678 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.678 EAL: request: mp_malloc_sync 00:04:44.678 EAL: No shared files mode enabled, IPC is disabled 00:04:44.678 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.678 EAL: No shared files mode enabled, IPC is disabled 00:04:44.678 EAL: No shared files mode enabled, IPC is disabled 00:04:44.678 EAL: No shared files mode enabled, IPC is disabled 00:04:44.678 ************************************ 00:04:44.678 END TEST env_vtophys 00:04:44.678 ************************************ 00:04:44.678 00:04:44.678 real 0m4.896s 00:04:44.678 user 0m4.100s 00:04:44.678 sys 0m0.646s 00:04:44.678 13:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.678 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.678 13:59:47 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.678 13:59:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.678 13:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.678 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.678 ************************************ 00:04:44.678 START TEST env_pci 00:04:44.678 ************************************ 00:04:44.678 13:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.678 00:04:44.678 00:04:44.678 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.678 http://cunit.sourceforge.net/ 00:04:44.678 00:04:44.678 00:04:44.678 Suite: pci 00:04:44.678 Test: pci_hook ...[2024-12-08 13:59:47.353182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56014 has claimed it 00:04:44.678 passed 00:04:44.678 00:04:44.678 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.678 suites 1 1 n/a 0 0 00:04:44.678 tests 1 1 1 0 0 00:04:44.678 asserts 25 25 25 0 n/a 00:04:44.678 00:04:44.678 Elapsed time = 0.003 seconds 00:04:44.678 EAL: Cannot find device (10000:00:01.0) 00:04:44.678 EAL: Failed to attach device on primary process 00:04:44.678 ************************************ 00:04:44.678 END TEST env_pci 00:04:44.678 ************************************ 00:04:44.678 00:04:44.678 real 0m0.056s 00:04:44.678 user 0m0.027s 00:04:44.678 sys 0m0.028s 00:04:44.678 13:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.678 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.678 13:59:47 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.678 13:59:47 -- env/env.sh@15 -- # uname 00:04:44.678 13:59:47 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.678 13:59:47 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:44.678 13:59:47 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.678 13:59:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:44.678 13:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.678 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.678 ************************************ 00:04:44.678 START TEST env_dpdk_post_init 00:04:44.678 ************************************ 00:04:44.678 13:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.678 EAL: Detected CPU lcores: 10 00:04:44.678 EAL: Detected NUMA nodes: 1 00:04:44.678 EAL: Detected shared linkage of DPDK 00:04:44.678 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.678 EAL: Selected IOVA mode 'PA' 00:04:44.678 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.938 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:44.938 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:44.938 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:44.938 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:44.938 Starting DPDK initialization... 00:04:44.938 Starting SPDK post initialization... 00:04:44.938 SPDK NVMe probe 00:04:44.938 Attaching to 0000:00:06.0 00:04:44.938 Attaching to 0000:00:07.0 00:04:44.938 Attaching to 0000:00:08.0 00:04:44.938 Attaching to 0000:00:09.0 00:04:44.938 Attached to 0000:00:06.0 00:04:44.938 Attached to 0000:00:07.0 00:04:44.938 Attached to 0000:00:09.0 00:04:44.938 Attached to 0000:00:08.0 00:04:44.938 Cleaning up... 00:04:44.938 ************************************ 00:04:44.938 END TEST env_dpdk_post_init 00:04:44.938 ************************************ 00:04:44.938 00:04:44.938 real 0m0.218s 00:04:44.938 user 0m0.058s 00:04:44.938 sys 0m0.061s 00:04:44.938 13:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:44.938 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.938 13:59:47 -- env/env.sh@26 -- # uname 00:04:44.938 13:59:47 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:44.938 13:59:47 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.938 13:59:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.938 13:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.938 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:44.938 ************************************ 00:04:44.938 START TEST env_mem_callbacks 00:04:44.938 ************************************ 00:04:44.938 13:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:44.938 EAL: Detected CPU lcores: 10 00:04:44.938 EAL: Detected NUMA nodes: 1 00:04:44.938 EAL: Detected shared linkage of DPDK 00:04:44.938 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.938 EAL: Selected IOVA mode 'PA' 00:04:44.938 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.938 00:04:44.938 00:04:44.938 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.938 http://cunit.sourceforge.net/ 00:04:44.938 00:04:44.938 00:04:44.938 Suite: memory 00:04:44.938 Test: test ... 
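mem_callbacks, whose trace follows, checks the same notification machinery one level up: it allocates an spdk_mem_map whose notify callback must fire for every spdk_mem_register()/spdk_mem_unregister(), including registrations the allocator performs internally (the malloc of 3145728 bytes below surfaces as a 4194304-byte register because registrations are rounded up to whole 2 MB hugepages). A sketch of such a hook, assuming the notify interface from spdk/env.h; app name and buffer size are illustrative:

#include "spdk/env.h"
#include <stdio.h>

static int
test_notify(void *cb_ctx, struct spdk_mem_map *map,
	    enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	/* Produces lines of the 'register 0x... 4194304' form seen below. */
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops test_ops = {
	.notify_cb = test_notify,
};

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_mem_map *map;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "memcb_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Allocating the map replays notifications for regions that are
	 * already registered; later registrations reach it live. */
	map = spdk_mem_map_alloc(0, &test_ops, NULL);

	buf = spdk_dma_malloc(3 * 1024 * 1024, 0, NULL);   /* triggers a register */
	spdk_dma_free(buf);                                /* ...and an unregister */

	spdk_mem_map_free(&map);
	return 0;
}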
00:04:44.938 register 0x200000200000 2097152 00:04:44.938 malloc 3145728 00:04:45.198 register 0x200000400000 4194304 00:04:45.198 buf 0x2000004fffc0 len 3145728 PASSED 00:04:45.198 malloc 64 00:04:45.198 buf 0x2000004ffec0 len 64 PASSED 00:04:45.198 malloc 4194304 00:04:45.198 register 0x200000800000 6291456 00:04:45.198 buf 0x2000009fffc0 len 4194304 PASSED 00:04:45.198 free 0x2000004fffc0 3145728 00:04:45.198 free 0x2000004ffec0 64 00:04:45.198 unregister 0x200000400000 4194304 PASSED 00:04:45.198 free 0x2000009fffc0 4194304 00:04:45.198 unregister 0x200000800000 6291456 PASSED 00:04:45.198 malloc 8388608 00:04:45.198 register 0x200000400000 10485760 00:04:45.198 buf 0x2000005fffc0 len 8388608 PASSED 00:04:45.198 free 0x2000005fffc0 8388608 00:04:45.198 unregister 0x200000400000 10485760 PASSED 00:04:45.198 passed 00:04:45.198 00:04:45.198 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.198 suites 1 1 n/a 0 0 00:04:45.198 tests 1 1 1 0 0 00:04:45.198 asserts 15 15 15 0 n/a 00:04:45.198 00:04:45.198 Elapsed time = 0.039 seconds 00:04:45.198 00:04:45.198 real 0m0.205s 00:04:45.198 user 0m0.056s 00:04:45.198 sys 0m0.046s 00:04:45.198 13:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.198 ************************************ 00:04:45.198 END TEST env_mem_callbacks 00:04:45.198 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:45.198 ************************************ 00:04:45.198 00:04:45.198 real 0m6.048s 00:04:45.198 user 0m4.634s 00:04:45.198 sys 0m0.992s 00:04:45.198 13:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.198 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:45.198 ************************************ 00:04:45.198 END TEST env 00:04:45.198 ************************************ 00:04:45.198 13:59:47 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.198 13:59:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.198 13:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.198 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:45.198 ************************************ 00:04:45.198 START TEST rpc 00:04:45.198 ************************************ 00:04:45.198 13:59:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.198 * Looking for test storage... 
* Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.456 13:59:48 -- rpc/rpc.sh@65 -- # spdk_pid=56140 00:04:45.456 13:59:48 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:45.456 13:59:48 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.456 13:59:48 -- rpc/rpc.sh@67 -- # waitforlisten 56140 00:04:45.456 13:59:48 -- common/autotest_common.sh@829 -- # '[' -z 56140 ']' 00:04:45.456 13:59:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.457 13:59:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.457 13:59:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.457 13:59:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.457 13:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:45.714 [2024-12-08 13:59:48.235233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:45.714 [2024-12-08 13:59:48.235519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56140 ] 00:04:45.714 [2024-12-08 13:59:48.382517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.714 [2024-12-08 13:59:48.521262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.714 [2024-12-08 13:59:48.521400] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:45.714 [2024-12-08 13:59:48.521412] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56140' to capture a snapshot of events at runtime. 00:04:45.714 [2024-12-08 13:59:48.521419] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56140 for offline analysis/debug.
00:04:45.714 [2024-12-08 13:59:48.521445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.280 13:59:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.280 13:59:49 -- common/autotest_common.sh@862 -- # return 0 00:04:46.280 13:59:49 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.280 13:59:49 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.280 13:59:49 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:46.280 13:59:49 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:46.280 13:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.280 13:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.280 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.280 ************************************ 00:04:46.280 START TEST rpc_integrity 00:04:46.280 ************************************ 00:04:46.280 13:59:49 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:46.280 13:59:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.280 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.280 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.280 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.280 13:59:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.280 13:59:49 -- rpc/rpc.sh@13 -- # jq length 00:04:46.280 13:59:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.280 13:59:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.280 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.280 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.280 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.280 13:59:49 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.280 13:59:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.280 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.280 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.280 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.280 13:59:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.280 { 00:04:46.280 "name": "Malloc0", 00:04:46.280 "aliases": [ 00:04:46.280 "96110fe4-4f55-42b1-a07a-abedf8187cc8" 00:04:46.280 ], 00:04:46.280 "product_name": "Malloc disk", 00:04:46.280 "block_size": 512, 00:04:46.280 "num_blocks": 16384, 00:04:46.280 "uuid": "96110fe4-4f55-42b1-a07a-abedf8187cc8", 00:04:46.280 "assigned_rate_limits": { 00:04:46.280 "rw_ios_per_sec": 0, 00:04:46.280 "rw_mbytes_per_sec": 0, 00:04:46.280 "r_mbytes_per_sec": 0, 00:04:46.280 "w_mbytes_per_sec": 0 00:04:46.280 }, 00:04:46.280 "claimed": false, 00:04:46.280 "zoned": false, 00:04:46.280 "supported_io_types": { 00:04:46.280 "read": true, 00:04:46.280 "write": true, 00:04:46.280 "unmap": true, 00:04:46.280 "write_zeroes": true, 00:04:46.280 "flush": true, 00:04:46.280 "reset": true, 00:04:46.280 "compare": false, 00:04:46.280 "compare_and_write": false, 00:04:46.280 "abort": true, 00:04:46.280 "nvme_admin": false, 00:04:46.280 "nvme_io": false 00:04:46.280 }, 00:04:46.280 "memory_domains": [ 00:04:46.280 { 00:04:46.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.280 
"dma_device_type": 2 00:04:46.280 } 00:04:46.280 ], 00:04:46.280 "driver_specific": {} 00:04:46.280 } 00:04:46.280 ]' 00:04:46.280 13:59:49 -- rpc/rpc.sh@17 -- # jq length 00:04:46.280 13:59:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.280 13:59:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.280 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.280 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.280 [2024-12-08 13:59:49.155203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.280 [2024-12-08 13:59:49.155336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.280 [2024-12-08 13:59:49.155359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:46.280 [2024-12-08 13:59:49.155368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.280 [2024-12-08 13:59:49.157030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.280 [2024-12-08 13:59:49.157059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.280 Passthru0 00:04:46.280 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.280 13:59:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.280 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.280 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.280 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.280 13:59:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.280 { 00:04:46.280 "name": "Malloc0", 00:04:46.280 "aliases": [ 00:04:46.280 "96110fe4-4f55-42b1-a07a-abedf8187cc8" 00:04:46.280 ], 00:04:46.280 "product_name": "Malloc disk", 00:04:46.280 "block_size": 512, 00:04:46.280 "num_blocks": 16384, 00:04:46.280 "uuid": "96110fe4-4f55-42b1-a07a-abedf8187cc8", 00:04:46.280 "assigned_rate_limits": { 00:04:46.280 "rw_ios_per_sec": 0, 00:04:46.280 "rw_mbytes_per_sec": 0, 00:04:46.280 "r_mbytes_per_sec": 0, 00:04:46.280 "w_mbytes_per_sec": 0 00:04:46.280 }, 00:04:46.280 "claimed": true, 00:04:46.280 "claim_type": "exclusive_write", 00:04:46.280 "zoned": false, 00:04:46.280 "supported_io_types": { 00:04:46.280 "read": true, 00:04:46.280 "write": true, 00:04:46.280 "unmap": true, 00:04:46.280 "write_zeroes": true, 00:04:46.280 "flush": true, 00:04:46.280 "reset": true, 00:04:46.280 "compare": false, 00:04:46.280 "compare_and_write": false, 00:04:46.280 "abort": true, 00:04:46.280 "nvme_admin": false, 00:04:46.280 "nvme_io": false 00:04:46.280 }, 00:04:46.280 "memory_domains": [ 00:04:46.280 { 00:04:46.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.280 "dma_device_type": 2 00:04:46.280 } 00:04:46.280 ], 00:04:46.280 "driver_specific": {} 00:04:46.280 }, 00:04:46.280 { 00:04:46.280 "name": "Passthru0", 00:04:46.280 "aliases": [ 00:04:46.280 "423a04a4-522a-50b9-87a6-870720c1ebd8" 00:04:46.280 ], 00:04:46.280 "product_name": "passthru", 00:04:46.280 "block_size": 512, 00:04:46.280 "num_blocks": 16384, 00:04:46.280 "uuid": "423a04a4-522a-50b9-87a6-870720c1ebd8", 00:04:46.280 "assigned_rate_limits": { 00:04:46.280 "rw_ios_per_sec": 0, 00:04:46.280 "rw_mbytes_per_sec": 0, 00:04:46.280 "r_mbytes_per_sec": 0, 00:04:46.280 "w_mbytes_per_sec": 0 00:04:46.280 }, 00:04:46.280 "claimed": false, 00:04:46.280 "zoned": false, 00:04:46.280 "supported_io_types": { 00:04:46.280 "read": true, 00:04:46.280 "write": true, 00:04:46.280 "unmap": true, 00:04:46.280 
"write_zeroes": true, 00:04:46.280 "flush": true, 00:04:46.280 "reset": true, 00:04:46.280 "compare": false, 00:04:46.280 "compare_and_write": false, 00:04:46.280 "abort": true, 00:04:46.280 "nvme_admin": false, 00:04:46.280 "nvme_io": false 00:04:46.280 }, 00:04:46.280 "memory_domains": [ 00:04:46.280 { 00:04:46.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.280 "dma_device_type": 2 00:04:46.280 } 00:04:46.280 ], 00:04:46.280 "driver_specific": { 00:04:46.280 "passthru": { 00:04:46.280 "name": "Passthru0", 00:04:46.280 "base_bdev_name": "Malloc0" 00:04:46.280 } 00:04:46.280 } 00:04:46.280 } 00:04:46.280 ]' 00:04:46.280 13:59:49 -- rpc/rpc.sh@21 -- # jq length 00:04:46.538 13:59:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.538 13:59:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.538 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.538 13:59:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.538 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.538 13:59:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.538 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.538 13:59:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.538 13:59:49 -- rpc/rpc.sh@26 -- # jq length 00:04:46.538 ************************************ 00:04:46.538 END TEST rpc_integrity 00:04:46.538 ************************************ 00:04:46.538 13:59:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.538 00:04:46.538 real 0m0.239s 00:04:46.538 user 0m0.127s 00:04:46.538 sys 0m0.034s 00:04:46.538 13:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 13:59:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.538 13:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.538 13:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 ************************************ 00:04:46.538 START TEST rpc_plugins 00:04:46.538 ************************************ 00:04:46.538 13:59:49 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:46.538 13:59:49 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.538 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.538 13:59:49 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.538 13:59:49 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.538 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.538 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.538 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.538 13:59:49 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.538 { 00:04:46.538 "name": "Malloc1", 00:04:46.538 "aliases": [ 00:04:46.538 "5575c985-85aa-462f-95b4-89b3375f65ca" 00:04:46.538 ], 00:04:46.538 "product_name": "Malloc disk", 00:04:46.538 
"block_size": 4096, 00:04:46.538 "num_blocks": 256, 00:04:46.538 "uuid": "5575c985-85aa-462f-95b4-89b3375f65ca", 00:04:46.538 "assigned_rate_limits": { 00:04:46.538 "rw_ios_per_sec": 0, 00:04:46.538 "rw_mbytes_per_sec": 0, 00:04:46.538 "r_mbytes_per_sec": 0, 00:04:46.538 "w_mbytes_per_sec": 0 00:04:46.538 }, 00:04:46.538 "claimed": false, 00:04:46.538 "zoned": false, 00:04:46.538 "supported_io_types": { 00:04:46.538 "read": true, 00:04:46.538 "write": true, 00:04:46.538 "unmap": true, 00:04:46.538 "write_zeroes": true, 00:04:46.538 "flush": true, 00:04:46.539 "reset": true, 00:04:46.539 "compare": false, 00:04:46.539 "compare_and_write": false, 00:04:46.539 "abort": true, 00:04:46.539 "nvme_admin": false, 00:04:46.539 "nvme_io": false 00:04:46.539 }, 00:04:46.539 "memory_domains": [ 00:04:46.539 { 00:04:46.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.539 "dma_device_type": 2 00:04:46.539 } 00:04:46.539 ], 00:04:46.539 "driver_specific": {} 00:04:46.539 } 00:04:46.539 ]' 00:04:46.539 13:59:49 -- rpc/rpc.sh@32 -- # jq length 00:04:46.539 13:59:49 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.539 13:59:49 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.539 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.539 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.539 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.539 13:59:49 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.539 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.539 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.539 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.539 13:59:49 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.539 13:59:49 -- rpc/rpc.sh@36 -- # jq length 00:04:46.539 ************************************ 00:04:46.539 END TEST rpc_plugins 00:04:46.539 ************************************ 00:04:46.539 13:59:49 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.539 00:04:46.539 real 0m0.100s 00:04:46.539 user 0m0.056s 00:04:46.539 sys 0m0.017s 00:04:46.539 13:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.539 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.539 13:59:49 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.539 13:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.539 13:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.539 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.539 ************************************ 00:04:46.539 START TEST rpc_trace_cmd_test 00:04:46.539 ************************************ 00:04:46.539 13:59:49 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:46.539 13:59:49 -- rpc/rpc.sh@40 -- # local info 00:04:46.539 13:59:49 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:46.539 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.539 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.797 13:59:49 -- rpc/rpc.sh@42 -- # info='{ 00:04:46.797 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56140", 00:04:46.797 "tpoint_group_mask": "0x8", 00:04:46.797 "iscsi_conn": { 00:04:46.797 "mask": "0x2", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "scsi": { 00:04:46.797 "mask": "0x4", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "bdev": { 00:04:46.797 "mask": "0x8", 00:04:46.797 "tpoint_mask": 
"0xffffffffffffffff" 00:04:46.797 }, 00:04:46.797 "nvmf_rdma": { 00:04:46.797 "mask": "0x10", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "nvmf_tcp": { 00:04:46.797 "mask": "0x20", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "ftl": { 00:04:46.797 "mask": "0x40", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "blobfs": { 00:04:46.797 "mask": "0x80", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "dsa": { 00:04:46.797 "mask": "0x200", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "thread": { 00:04:46.797 "mask": "0x400", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "nvme_pcie": { 00:04:46.797 "mask": "0x800", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "iaa": { 00:04:46.797 "mask": "0x1000", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "nvme_tcp": { 00:04:46.797 "mask": "0x2000", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 }, 00:04:46.797 "bdev_nvme": { 00:04:46.797 "mask": "0x4000", 00:04:46.797 "tpoint_mask": "0x0" 00:04:46.797 } 00:04:46.797 }' 00:04:46.797 13:59:49 -- rpc/rpc.sh@43 -- # jq length 00:04:46.797 13:59:49 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:46.797 13:59:49 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:46.797 13:59:49 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:46.797 13:59:49 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:46.797 13:59:49 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:46.797 13:59:49 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:46.797 13:59:49 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:46.797 13:59:49 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:46.797 ************************************ 00:04:46.797 END TEST rpc_trace_cmd_test 00:04:46.797 ************************************ 00:04:46.797 13:59:49 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:46.797 00:04:46.797 real 0m0.163s 00:04:46.797 user 0m0.132s 00:04:46.797 sys 0m0.021s 00:04:46.797 13:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.797 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.797 13:59:49 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:46.797 13:59:49 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:46.797 13:59:49 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:46.798 13:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.798 13:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.798 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.798 ************************************ 00:04:46.798 START TEST rpc_daemon_integrity 00:04:46.798 ************************************ 00:04:46.798 13:59:49 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:46.798 13:59:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.798 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.798 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.798 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.798 13:59:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.798 13:59:49 -- rpc/rpc.sh@13 -- # jq length 00:04:46.798 13:59:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.798 13:59:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.798 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.798 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.798 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.798 13:59:49 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:46.798 13:59:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.798 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.798 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.056 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.056 13:59:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.056 { 00:04:47.056 "name": "Malloc2", 00:04:47.056 "aliases": [ 00:04:47.056 "91e807b2-1d2b-4cd7-939c-6787d339c37f" 00:04:47.056 ], 00:04:47.056 "product_name": "Malloc disk", 00:04:47.056 "block_size": 512, 00:04:47.056 "num_blocks": 16384, 00:04:47.056 "uuid": "91e807b2-1d2b-4cd7-939c-6787d339c37f", 00:04:47.056 "assigned_rate_limits": { 00:04:47.056 "rw_ios_per_sec": 0, 00:04:47.056 "rw_mbytes_per_sec": 0, 00:04:47.056 "r_mbytes_per_sec": 0, 00:04:47.056 "w_mbytes_per_sec": 0 00:04:47.056 }, 00:04:47.056 "claimed": false, 00:04:47.056 "zoned": false, 00:04:47.056 "supported_io_types": { 00:04:47.056 "read": true, 00:04:47.056 "write": true, 00:04:47.056 "unmap": true, 00:04:47.056 "write_zeroes": true, 00:04:47.056 "flush": true, 00:04:47.056 "reset": true, 00:04:47.056 "compare": false, 00:04:47.056 "compare_and_write": false, 00:04:47.056 "abort": true, 00:04:47.056 "nvme_admin": false, 00:04:47.056 "nvme_io": false 00:04:47.056 }, 00:04:47.056 "memory_domains": [ 00:04:47.056 { 00:04:47.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.056 "dma_device_type": 2 00:04:47.056 } 00:04:47.056 ], 00:04:47.056 "driver_specific": {} 00:04:47.056 } 00:04:47.056 ]' 00:04:47.056 13:59:49 -- rpc/rpc.sh@17 -- # jq length 00:04:47.056 13:59:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.056 13:59:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:47.056 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.056 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.056 [2024-12-08 13:59:49.763678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:47.056 [2024-12-08 13:59:49.763721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.056 [2024-12-08 13:59:49.763735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:47.057 [2024-12-08 13:59:49.763743] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.057 [2024-12-08 13:59:49.765408] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.057 [2024-12-08 13:59:49.765438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.057 Passthru0 00:04:47.057 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.057 13:59:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.057 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.057 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.057 13:59:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.057 { 00:04:47.057 "name": "Malloc2", 00:04:47.057 "aliases": [ 00:04:47.057 "91e807b2-1d2b-4cd7-939c-6787d339c37f" 00:04:47.057 ], 00:04:47.057 "product_name": "Malloc disk", 00:04:47.057 "block_size": 512, 00:04:47.057 "num_blocks": 16384, 00:04:47.057 "uuid": "91e807b2-1d2b-4cd7-939c-6787d339c37f", 00:04:47.057 "assigned_rate_limits": { 00:04:47.057 "rw_ios_per_sec": 0, 00:04:47.057 "rw_mbytes_per_sec": 0, 00:04:47.057 "r_mbytes_per_sec": 0, 00:04:47.057 
"w_mbytes_per_sec": 0 00:04:47.057 }, 00:04:47.057 "claimed": true, 00:04:47.057 "claim_type": "exclusive_write", 00:04:47.057 "zoned": false, 00:04:47.057 "supported_io_types": { 00:04:47.057 "read": true, 00:04:47.057 "write": true, 00:04:47.057 "unmap": true, 00:04:47.057 "write_zeroes": true, 00:04:47.057 "flush": true, 00:04:47.057 "reset": true, 00:04:47.057 "compare": false, 00:04:47.057 "compare_and_write": false, 00:04:47.057 "abort": true, 00:04:47.057 "nvme_admin": false, 00:04:47.057 "nvme_io": false 00:04:47.057 }, 00:04:47.057 "memory_domains": [ 00:04:47.057 { 00:04:47.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.057 "dma_device_type": 2 00:04:47.057 } 00:04:47.057 ], 00:04:47.057 "driver_specific": {} 00:04:47.057 }, 00:04:47.057 { 00:04:47.057 "name": "Passthru0", 00:04:47.057 "aliases": [ 00:04:47.057 "ea2bf72d-0b2f-51ce-a66b-d3d7384374e8" 00:04:47.057 ], 00:04:47.057 "product_name": "passthru", 00:04:47.057 "block_size": 512, 00:04:47.057 "num_blocks": 16384, 00:04:47.057 "uuid": "ea2bf72d-0b2f-51ce-a66b-d3d7384374e8", 00:04:47.057 "assigned_rate_limits": { 00:04:47.057 "rw_ios_per_sec": 0, 00:04:47.057 "rw_mbytes_per_sec": 0, 00:04:47.057 "r_mbytes_per_sec": 0, 00:04:47.057 "w_mbytes_per_sec": 0 00:04:47.057 }, 00:04:47.057 "claimed": false, 00:04:47.057 "zoned": false, 00:04:47.057 "supported_io_types": { 00:04:47.057 "read": true, 00:04:47.057 "write": true, 00:04:47.057 "unmap": true, 00:04:47.057 "write_zeroes": true, 00:04:47.057 "flush": true, 00:04:47.057 "reset": true, 00:04:47.057 "compare": false, 00:04:47.057 "compare_and_write": false, 00:04:47.057 "abort": true, 00:04:47.057 "nvme_admin": false, 00:04:47.057 "nvme_io": false 00:04:47.057 }, 00:04:47.057 "memory_domains": [ 00:04:47.057 { 00:04:47.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.057 "dma_device_type": 2 00:04:47.057 } 00:04:47.057 ], 00:04:47.057 "driver_specific": { 00:04:47.057 "passthru": { 00:04:47.057 "name": "Passthru0", 00:04:47.057 "base_bdev_name": "Malloc2" 00:04:47.057 } 00:04:47.057 } 00:04:47.057 } 00:04:47.057 ]' 00:04:47.057 13:59:49 -- rpc/rpc.sh@21 -- # jq length 00:04:47.057 13:59:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.057 13:59:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.057 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.057 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.057 13:59:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:47.057 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.057 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.057 13:59:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.057 13:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.057 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 13:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.057 13:59:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.057 13:59:49 -- rpc/rpc.sh@26 -- # jq length 00:04:47.057 ************************************ 00:04:47.057 END TEST rpc_daemon_integrity 00:04:47.057 ************************************ 00:04:47.057 13:59:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.057 00:04:47.057 real 0m0.222s 00:04:47.057 user 0m0.131s 00:04:47.057 sys 0m0.020s 00:04:47.057 13:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.057 
13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 13:59:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.057 13:59:49 -- rpc/rpc.sh@84 -- # killprocess 56140 00:04:47.057 13:59:49 -- common/autotest_common.sh@936 -- # '[' -z 56140 ']' 00:04:47.057 13:59:49 -- common/autotest_common.sh@940 -- # kill -0 56140 00:04:47.057 13:59:49 -- common/autotest_common.sh@941 -- # uname 00:04:47.057 13:59:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.057 13:59:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56140 00:04:47.058 killing process with pid 56140 00:04:47.058 13:59:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.058 13:59:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.058 13:59:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56140' 00:04:47.058 13:59:49 -- common/autotest_common.sh@955 -- # kill 56140 00:04:47.166 13:59:49 -- common/autotest_common.sh@960 -- # wait 56140 00:04:48.433 00:04:48.433 real 0m3.096s 00:04:48.433 user 0m3.512s 00:04:48.433 sys 0m0.531s 00:04:48.433 13:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.433 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.433 ************************************ 00:04:48.433 END TEST rpc 00:04:48.433 ************************************ 00:04:48.434 13:59:51 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.434 13:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.434 13:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.434 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.434 ************************************ 00:04:48.434 START TEST rpc_client 00:04:48.434 ************************************ 00:04:48.434 13:59:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:48.434 * Looking for test storage... 00:04:48.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:48.434 13:59:51 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:48.694 OK 00:04:48.694 13:59:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:48.694 00:04:48.694 real 0m0.188s 00:04:48.694 user 0m0.105s 00:04:48.694 sys 0m0.089s 00:04:48.694 ************************************ 00:04:48.694 END TEST rpc_client 00:04:48.694 ************************************ 00:04:48.694 13:59:51 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.694 13:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.694 13:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.694 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.694 ************************************ 00:04:48.694 START TEST
json_config 00:04:48.694 ************************************ 00:04:48.694 13:59:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:48.694 13:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:48.694 13:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.694 13:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:48.694 13:59:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.694 13:59:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.694 13:59:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.694 13:59:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.694 13:59:51 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.694 13:59:51 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.694 13:59:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.694 13:59:51 -- scripts/common.sh@336 -- # read -ra ver2 00:04:48.694 13:59:51 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.694 13:59:51 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.694 13:59:51 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.694 13:59:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.694 13:59:51 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.694 13:59:51 -- scripts/common.sh@344 -- # : 1 00:04:48.694 13:59:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.694 13:59:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.694 13:59:51 -- scripts/common.sh@364 -- # decimal 1 00:04:48.694 13:59:51 -- scripts/common.sh@352 -- # local d=1 00:04:48.694 13:59:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.694 13:59:51 -- scripts/common.sh@354 -- # echo 1 00:04:48.694 13:59:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:48.694 13:59:51 -- scripts/common.sh@365 -- # decimal 2 00:04:48.694 13:59:51 -- scripts/common.sh@352 -- # local d=2 00:04:48.694 13:59:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.694 13:59:51 -- scripts/common.sh@354 -- # echo 2 00:04:48.694 13:59:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:48.694 13:59:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:48.694 13:59:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:48.694 13:59:51 -- scripts/common.sh@367 -- # return 0 00:04:48.694 13:59:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.694 13:59:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.694 --rc genhtml_branch_coverage=1 00:04:48.694 --rc genhtml_function_coverage=1 00:04:48.694 --rc genhtml_legend=1 00:04:48.694 --rc geninfo_all_blocks=1 00:04:48.694 --rc geninfo_unexecuted_blocks=1 00:04:48.694 00:04:48.694 ' 00:04:48.694 13:59:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.694 --rc genhtml_branch_coverage=1 00:04:48.694 --rc genhtml_function_coverage=1 00:04:48.694 --rc genhtml_legend=1 00:04:48.694 --rc geninfo_all_blocks=1 00:04:48.694 --rc geninfo_unexecuted_blocks=1 00:04:48.694 00:04:48.694 ' 00:04:48.694 13:59:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.694 --rc genhtml_branch_coverage=1 00:04:48.694 --rc genhtml_function_coverage=1 00:04:48.694 --rc genhtml_legend=1 00:04:48.694 --rc 
geninfo_all_blocks=1 00:04:48.694 --rc geninfo_unexecuted_blocks=1 00:04:48.694 00:04:48.694 ' 00:04:48.694 13:59:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.694 --rc genhtml_branch_coverage=1 00:04:48.694 --rc genhtml_function_coverage=1 00:04:48.694 --rc genhtml_legend=1 00:04:48.694 --rc geninfo_all_blocks=1 00:04:48.694 --rc geninfo_unexecuted_blocks=1 00:04:48.694 00:04:48.694 ' 00:04:48.694 13:59:51 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.694 13:59:51 -- nvmf/common.sh@7 -- # uname -s 00:04:48.695 13:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.695 13:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.695 13:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.695 13:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.695 13:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.695 13:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.695 13:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.695 13:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.695 13:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.695 13:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.695 13:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:097a4e0a-03c7-4aa5-9446-1422b0e678be 00:04:48.695 13:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=097a4e0a-03c7-4aa5-9446-1422b0e678be 00:04:48.695 13:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.695 13:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.695 13:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.695 13:59:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.695 13:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.695 13:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.695 13:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.695 13:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.695 13:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.695 13:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.695 
13:59:51 -- paths/export.sh@5 -- # export PATH 00:04:48.695 13:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.695 13:59:51 -- nvmf/common.sh@46 -- # : 0 00:04:48.695 13:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:48.695 13:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:48.695 13:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:48.695 13:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.695 13:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.695 13:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:48.695 13:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:48.695 13:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:48.695 13:59:51 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:48.695 13:59:51 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:48.695 13:59:51 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:48.695 13:59:51 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.695 WARNING: No tests are enabled so not running JSON configuration tests 00:04:48.695 13:59:51 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:48.695 13:59:51 -- json_config/json_config.sh@27 -- # exit 0 00:04:48.695 00:04:48.695 real 0m0.129s 00:04:48.695 user 0m0.079s 00:04:48.695 sys 0m0.051s 00:04:48.695 13:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.695 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.695 ************************************ 00:04:48.695 END TEST json_config 00:04:48.695 ************************************ 00:04:48.695 13:59:51 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.695 13:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.695 13:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.695 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.695 ************************************ 00:04:48.695 START TEST json_config_extra_key 00:04:48.695 ************************************ 00:04:48.695 13:59:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.957 13:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:48.957 13:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.957 13:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:48.957 13:59:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.957 13:59:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.957 13:59:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.957 13:59:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.957 13:59:51 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.957 13:59:51 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.957 13:59:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.957 13:59:51 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:48.957 13:59:51 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.957 13:59:51 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.957 13:59:51 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.957 13:59:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.957 13:59:51 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.957 13:59:51 -- scripts/common.sh@344 -- # : 1 00:04:48.957 13:59:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.957 13:59:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.957 13:59:51 -- scripts/common.sh@364 -- # decimal 1 00:04:48.957 13:59:51 -- scripts/common.sh@352 -- # local d=1 00:04:48.957 13:59:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.957 13:59:51 -- scripts/common.sh@354 -- # echo 1 00:04:48.957 13:59:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:48.957 13:59:51 -- scripts/common.sh@365 -- # decimal 2 00:04:48.957 13:59:51 -- scripts/common.sh@352 -- # local d=2 00:04:48.957 13:59:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.957 13:59:51 -- scripts/common.sh@354 -- # echo 2 00:04:48.957 13:59:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:48.957 13:59:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:48.957 13:59:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:48.957 13:59:51 -- scripts/common.sh@367 -- # return 0 00:04:48.957 13:59:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.957 13:59:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:48.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.957 --rc genhtml_branch_coverage=1 00:04:48.957 --rc genhtml_function_coverage=1 00:04:48.957 --rc genhtml_legend=1 00:04:48.957 --rc geninfo_all_blocks=1 00:04:48.957 --rc geninfo_unexecuted_blocks=1 00:04:48.957 00:04:48.957 ' 00:04:48.957 13:59:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:48.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.957 --rc genhtml_branch_coverage=1 00:04:48.957 --rc genhtml_function_coverage=1 00:04:48.957 --rc genhtml_legend=1 00:04:48.957 --rc geninfo_all_blocks=1 00:04:48.957 --rc geninfo_unexecuted_blocks=1 00:04:48.957 00:04:48.957 ' 00:04:48.957 13:59:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:48.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.957 --rc genhtml_branch_coverage=1 00:04:48.957 --rc genhtml_function_coverage=1 00:04:48.957 --rc genhtml_legend=1 00:04:48.957 --rc geninfo_all_blocks=1 00:04:48.957 --rc geninfo_unexecuted_blocks=1 00:04:48.957 00:04:48.957 ' 00:04:48.957 13:59:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:48.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.957 --rc genhtml_branch_coverage=1 00:04:48.957 --rc genhtml_function_coverage=1 00:04:48.957 --rc genhtml_legend=1 00:04:48.957 --rc geninfo_all_blocks=1 00:04:48.957 --rc geninfo_unexecuted_blocks=1 00:04:48.957 00:04:48.957 ' 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.957 13:59:51 -- nvmf/common.sh@7 -- # uname -s 00:04:48.957 13:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.957 13:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.957 13:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.957 13:59:51 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:48.957 13:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.957 13:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.957 13:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.957 13:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.957 13:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.957 13:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.957 13:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:097a4e0a-03c7-4aa5-9446-1422b0e678be 00:04:48.957 13:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=097a4e0a-03c7-4aa5-9446-1422b0e678be 00:04:48.957 13:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.957 13:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.957 13:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.957 13:59:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.957 13:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.957 13:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.957 13:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.957 13:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.957 13:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.957 13:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.957 13:59:51 -- paths/export.sh@5 -- # export PATH 00:04:48.957 13:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.957 13:59:51 -- nvmf/common.sh@46 -- # : 0 00:04:48.957 13:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:48.957 13:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:48.957 13:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:48.957 13:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.957 13:59:51 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.957 13:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:48.957 13:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:48.957 13:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:48.957 INFO: launching applications... 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56439 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:48.957 Waiting for target to run... 00:04:48.957 13:59:51 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56439 /var/tmp/spdk_tgt.sock 00:04:48.958 13:59:51 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.958 13:59:51 -- common/autotest_common.sh@829 -- # '[' -z 56439 ']' 00:04:48.958 13:59:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.958 13:59:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.958 13:59:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.958 13:59:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.958 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:48.958 [2024-12-08 13:59:51.798795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:48.958 [2024-12-08 13:59:51.799053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56439 ] 00:04:49.315 [2024-12-08 13:59:52.106592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.597 [2024-12-08 13:59:52.246503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.597 [2024-12-08 13:59:52.246804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.855 13:59:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.855 00:04:49.855 INFO: shutting down applications... 00:04:49.855 13:59:52 -- common/autotest_common.sh@862 -- # return 0 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56439 ]] 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56439 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56439 00:04:49.855 13:59:52 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:50.424 13:59:53 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:50.424 13:59:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:50.424 13:59:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56439 00:04:50.424 13:59:53 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:50.995 13:59:53 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:50.995 13:59:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:50.995 13:59:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56439 00:04:50.995 13:59:53 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:51.254 13:59:54 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:51.254 13:59:54 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:51.254 13:59:54 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56439 00:04:51.254 13:59:54 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56439 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:51.825 SPDK target shutdown done 00:04:51.825 Success 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:51.825 13:59:54 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:51.825 
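The shutdown traced above follows a simple poll-until-dead pattern: send SIGINT, then probe the PID with kill -0 every half second, up to 30 tries. A minimal sketch of that loop, assuming $pid holds the target PID (56439 in this run); this is not the verbatim json_config_extra_key.sh source:

    # Graceful-shutdown poll, as traced above (sketch, not verbatim source).
    kill -SIGINT "$pid"                       # ask spdk_tgt to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then # signal 0 = liveness probe only
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5                             # still running; wait and re-probe
    done

Each iteration shows up in the trace as one kill -0 56439 / sleep 0.5 pair; here the target needed four half-second waits before the fifth probe found it gone.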
00:04:51.825 real 0m3.040s 00:04:51.825 user 0m2.454s 00:04:51.825 sys 0m0.384s 00:04:51.825 13:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.825 ************************************ 00:04:51.825 END TEST json_config_extra_key 00:04:51.825 ************************************ 00:04:51.825 13:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 13:59:54 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.825 13:59:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.825 13:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.825 13:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 ************************************ 00:04:51.825 START TEST alias_rpc 00:04:51.825 ************************************ 00:04:51.825 13:59:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.825 * Looking for test storage... 00:04:52.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:52.087 13:59:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.087 13:59:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.087 13:59:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.087 13:59:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.087 13:59:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.087 13:59:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.087 13:59:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.087 13:59:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.087 13:59:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.087 13:59:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.087 13:59:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.087 13:59:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.087 13:59:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.087 13:59:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.087 13:59:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.087 13:59:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.087 13:59:54 -- scripts/common.sh@344 -- # : 1 00:04:52.087 13:59:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.087 13:59:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.087 13:59:54 -- scripts/common.sh@364 -- # decimal 1 00:04:52.087 13:59:54 -- scripts/common.sh@352 -- # local d=1 00:04:52.087 13:59:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.087 13:59:54 -- scripts/common.sh@354 -- # echo 1 00:04:52.087 13:59:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.087 13:59:54 -- scripts/common.sh@365 -- # decimal 2 00:04:52.087 13:59:54 -- scripts/common.sh@352 -- # local d=2 00:04:52.087 13:59:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.087 13:59:54 -- scripts/common.sh@354 -- # echo 2 00:04:52.087 13:59:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.087 13:59:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.087 13:59:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.087 13:59:54 -- scripts/common.sh@367 -- # return 0 00:04:52.087 13:59:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.087 13:59:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.087 --rc genhtml_branch_coverage=1 00:04:52.087 --rc genhtml_function_coverage=1 00:04:52.087 --rc genhtml_legend=1 00:04:52.087 --rc geninfo_all_blocks=1 00:04:52.087 --rc geninfo_unexecuted_blocks=1 00:04:52.087 00:04:52.087 ' 00:04:52.087 13:59:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.087 --rc genhtml_branch_coverage=1 00:04:52.087 --rc genhtml_function_coverage=1 00:04:52.087 --rc genhtml_legend=1 00:04:52.087 --rc geninfo_all_blocks=1 00:04:52.087 --rc geninfo_unexecuted_blocks=1 00:04:52.087 00:04:52.087 ' 00:04:52.087 13:59:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.087 --rc genhtml_branch_coverage=1 00:04:52.087 --rc genhtml_function_coverage=1 00:04:52.087 --rc genhtml_legend=1 00:04:52.087 --rc geninfo_all_blocks=1 00:04:52.087 --rc geninfo_unexecuted_blocks=1 00:04:52.087 00:04:52.087 ' 00:04:52.087 13:59:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.087 --rc genhtml_branch_coverage=1 00:04:52.087 --rc genhtml_function_coverage=1 00:04:52.087 --rc genhtml_legend=1 00:04:52.087 --rc geninfo_all_blocks=1 00:04:52.087 --rc geninfo_unexecuted_blocks=1 00:04:52.087 00:04:52.087 ' 00:04:52.087 13:59:54 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.087 13:59:54 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56537 00:04:52.087 13:59:54 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56537 00:04:52.087 13:59:54 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.087 13:59:54 -- common/autotest_common.sh@829 -- # '[' -z 56537 ']' 00:04:52.087 13:59:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.087 13:59:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.087 13:59:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
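The lt 1.15 2 call traced just above is scripts/common.sh comparing the installed lcov version against 2 to choose the right coverage flags: both strings are split on ., -, and : into arrays and compared component-wise, with the loop bound (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) covering the longer of the two. A minimal re-sketch of the same idea, not the verbatim helper:

    # Component-wise "less than" version compare, as traced above (sketch).
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"        # "2"    -> (2)
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        done
        return 1                              # equal: not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"      # true in this run, so the --rc lcov_* options get exported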
00:04:52.087 13:59:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.087 13:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:52.087 [2024-12-08 13:59:54.890834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:52.087 [2024-12-08 13:59:54.891099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56537 ] 00:04:52.350 [2024-12-08 13:59:55.038127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.350 [2024-12-08 13:59:55.178006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.350 [2024-12-08 13:59:55.178278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.916 13:59:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.916 13:59:55 -- common/autotest_common.sh@862 -- # return 0 00:04:52.916 13:59:55 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.174 13:59:55 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56537 00:04:53.174 13:59:55 -- common/autotest_common.sh@936 -- # '[' -z 56537 ']' 00:04:53.174 13:59:55 -- common/autotest_common.sh@940 -- # kill -0 56537 00:04:53.174 13:59:55 -- common/autotest_common.sh@941 -- # uname 00:04:53.174 13:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.174 13:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56537 00:04:53.174 13:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.174 13:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.174 13:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56537' 00:04:53.174 killing process with pid 56537 00:04:53.174 13:59:55 -- common/autotest_common.sh@955 -- # kill 56537 00:04:53.174 13:59:55 -- common/autotest_common.sh@960 -- # wait 56537 00:04:54.551 00:04:54.551 real 0m2.414s 00:04:54.551 user 0m2.474s 00:04:54.551 sys 0m0.385s 00:04:54.551 13:59:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.551 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.551 ************************************ 00:04:54.551 END TEST alias_rpc 00:04:54.551 ************************************ 00:04:54.551 13:59:57 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:54.551 13:59:57 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.551 13:59:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.551 13:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.551 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.551 ************************************ 00:04:54.551 START TEST spdkcli_tcp 00:04:54.551 ************************************ 00:04:54.551 13:59:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.551 * Looking for test storage... 
00:04:54.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.551 13:59:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:54.551 13:59:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:54.551 13:59:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:54.551 13:59:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:54.551 13:59:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:54.551 13:59:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:54.551 13:59:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:54.551 13:59:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:54.551 13:59:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:54.551 13:59:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.551 13:59:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:54.551 13:59:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:54.551 13:59:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:54.551 13:59:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:54.551 13:59:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:54.551 13:59:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:54.551 13:59:57 -- scripts/common.sh@344 -- # : 1 00:04:54.551 13:59:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:54.551 13:59:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.551 13:59:57 -- scripts/common.sh@364 -- # decimal 1 00:04:54.551 13:59:57 -- scripts/common.sh@352 -- # local d=1 00:04:54.551 13:59:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.551 13:59:57 -- scripts/common.sh@354 -- # echo 1 00:04:54.551 13:59:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:54.551 13:59:57 -- scripts/common.sh@365 -- # decimal 2 00:04:54.551 13:59:57 -- scripts/common.sh@352 -- # local d=2 00:04:54.551 13:59:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.551 13:59:57 -- scripts/common.sh@354 -- # echo 2 00:04:54.551 13:59:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:54.551 13:59:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:54.551 13:59:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:54.551 13:59:57 -- scripts/common.sh@367 -- # return 0 00:04:54.551 13:59:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.551 13:59:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.551 --rc genhtml_branch_coverage=1 00:04:54.551 --rc genhtml_function_coverage=1 00:04:54.551 --rc genhtml_legend=1 00:04:54.551 --rc geninfo_all_blocks=1 00:04:54.551 --rc geninfo_unexecuted_blocks=1 00:04:54.551 00:04:54.551 ' 00:04:54.551 13:59:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.551 --rc genhtml_branch_coverage=1 00:04:54.551 --rc genhtml_function_coverage=1 00:04:54.551 --rc genhtml_legend=1 00:04:54.551 --rc geninfo_all_blocks=1 00:04:54.551 --rc geninfo_unexecuted_blocks=1 00:04:54.551 00:04:54.551 ' 00:04:54.551 13:59:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.551 --rc genhtml_branch_coverage=1 00:04:54.551 --rc genhtml_function_coverage=1 00:04:54.551 --rc genhtml_legend=1 00:04:54.551 --rc geninfo_all_blocks=1 00:04:54.551 --rc geninfo_unexecuted_blocks=1 00:04:54.551 00:04:54.551 ' 00:04:54.551 13:59:57 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.551 --rc genhtml_branch_coverage=1 00:04:54.551 --rc genhtml_function_coverage=1 00:04:54.551 --rc genhtml_legend=1 00:04:54.551 --rc geninfo_all_blocks=1 00:04:54.551 --rc geninfo_unexecuted_blocks=1 00:04:54.551 00:04:54.551 ' 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.551 13:59:57 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.551 13:59:57 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.551 13:59:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.551 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56621 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@27 -- # waitforlisten 56621 00:04:54.551 13:59:57 -- common/autotest_common.sh@829 -- # '[' -z 56621 ']' 00:04:54.551 13:59:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.551 13:59:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.551 13:59:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.551 13:59:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.551 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.551 13:59:57 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.551 [2024-12-08 13:59:57.369464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
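Note that this test launches the target with a two-core mask (-m 0x3), which is why two reactor threads come up a few lines below, and the UNIX RPC socket is then bridged to TCP for the spdkcli client. A sketch of the fixture assembled from the commands traced at tcp.sh@24/@30/@33 above and below (paths as in this workspace):

    # spdkcli_tcp fixture (sketch assembled from this trace, not verbatim source):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &    # cores 0-1, main core 0
    spdk_tgt_pid=$!
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &          # expose RPC over TCP 9998
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods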
00:04:54.551 [2024-12-08 13:59:57.369574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56621 ] 00:04:54.811 [2024-12-08 13:59:57.516407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.811 [2024-12-08 13:59:57.695433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.811 [2024-12-08 13:59:57.695769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.811 [2024-12-08 13:59:57.695916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.193 13:59:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.193 13:59:58 -- common/autotest_common.sh@862 -- # return 0 00:04:56.193 13:59:58 -- spdkcli/tcp.sh@31 -- # socat_pid=56651 00:04:56.193 13:59:58 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.193 13:59:58 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.454 [ 00:04:56.454 "bdev_malloc_delete", 00:04:56.454 "bdev_malloc_create", 00:04:56.454 "bdev_null_resize", 00:04:56.454 "bdev_null_delete", 00:04:56.454 "bdev_null_create", 00:04:56.454 "bdev_nvme_cuse_unregister", 00:04:56.454 "bdev_nvme_cuse_register", 00:04:56.454 "bdev_opal_new_user", 00:04:56.454 "bdev_opal_set_lock_state", 00:04:56.454 "bdev_opal_delete", 00:04:56.454 "bdev_opal_get_info", 00:04:56.454 "bdev_opal_create", 00:04:56.454 "bdev_nvme_opal_revert", 00:04:56.454 "bdev_nvme_opal_init", 00:04:56.454 "bdev_nvme_send_cmd", 00:04:56.454 "bdev_nvme_get_path_iostat", 00:04:56.454 "bdev_nvme_get_mdns_discovery_info", 00:04:56.454 "bdev_nvme_stop_mdns_discovery", 00:04:56.454 "bdev_nvme_start_mdns_discovery", 00:04:56.454 "bdev_nvme_set_multipath_policy", 00:04:56.454 "bdev_nvme_set_preferred_path", 00:04:56.454 "bdev_nvme_get_io_paths", 00:04:56.454 "bdev_nvme_remove_error_injection", 00:04:56.454 "bdev_nvme_add_error_injection", 00:04:56.454 "bdev_nvme_get_discovery_info", 00:04:56.454 "bdev_nvme_stop_discovery", 00:04:56.454 "bdev_nvme_start_discovery", 00:04:56.454 "bdev_nvme_get_controller_health_info", 00:04:56.454 "bdev_nvme_disable_controller", 00:04:56.454 "bdev_nvme_enable_controller", 00:04:56.454 "bdev_nvme_reset_controller", 00:04:56.454 "bdev_nvme_get_transport_statistics", 00:04:56.454 "bdev_nvme_apply_firmware", 00:04:56.454 "bdev_nvme_detach_controller", 00:04:56.454 "bdev_nvme_get_controllers", 00:04:56.454 "bdev_nvme_attach_controller", 00:04:56.454 "bdev_nvme_set_hotplug", 00:04:56.454 "bdev_nvme_set_options", 00:04:56.454 "bdev_passthru_delete", 00:04:56.454 "bdev_passthru_create", 00:04:56.454 "bdev_lvol_grow_lvstore", 00:04:56.454 "bdev_lvol_get_lvols", 00:04:56.454 "bdev_lvol_get_lvstores", 00:04:56.454 "bdev_lvol_delete", 00:04:56.454 "bdev_lvol_set_read_only", 00:04:56.454 "bdev_lvol_resize", 00:04:56.455 "bdev_lvol_decouple_parent", 00:04:56.455 "bdev_lvol_inflate", 00:04:56.455 "bdev_lvol_rename", 00:04:56.455 "bdev_lvol_clone_bdev", 00:04:56.455 "bdev_lvol_clone", 00:04:56.455 "bdev_lvol_snapshot", 00:04:56.455 "bdev_lvol_create", 00:04:56.455 "bdev_lvol_delete_lvstore", 00:04:56.455 "bdev_lvol_rename_lvstore", 00:04:56.455 "bdev_lvol_create_lvstore", 00:04:56.455 "bdev_raid_set_options", 00:04:56.455 "bdev_raid_remove_base_bdev", 00:04:56.455 "bdev_raid_add_base_bdev", 
00:04:56.455 "bdev_raid_delete", 00:04:56.455 "bdev_raid_create", 00:04:56.455 "bdev_raid_get_bdevs", 00:04:56.455 "bdev_error_inject_error", 00:04:56.455 "bdev_error_delete", 00:04:56.455 "bdev_error_create", 00:04:56.455 "bdev_split_delete", 00:04:56.455 "bdev_split_create", 00:04:56.455 "bdev_delay_delete", 00:04:56.455 "bdev_delay_create", 00:04:56.455 "bdev_delay_update_latency", 00:04:56.455 "bdev_zone_block_delete", 00:04:56.455 "bdev_zone_block_create", 00:04:56.455 "blobfs_create", 00:04:56.455 "blobfs_detect", 00:04:56.455 "blobfs_set_cache_size", 00:04:56.455 "bdev_xnvme_delete", 00:04:56.455 "bdev_xnvme_create", 00:04:56.455 "bdev_aio_delete", 00:04:56.455 "bdev_aio_rescan", 00:04:56.455 "bdev_aio_create", 00:04:56.455 "bdev_ftl_set_property", 00:04:56.455 "bdev_ftl_get_properties", 00:04:56.455 "bdev_ftl_get_stats", 00:04:56.455 "bdev_ftl_unmap", 00:04:56.455 "bdev_ftl_unload", 00:04:56.455 "bdev_ftl_delete", 00:04:56.455 "bdev_ftl_load", 00:04:56.455 "bdev_ftl_create", 00:04:56.455 "bdev_virtio_attach_controller", 00:04:56.455 "bdev_virtio_scsi_get_devices", 00:04:56.455 "bdev_virtio_detach_controller", 00:04:56.455 "bdev_virtio_blk_set_hotplug", 00:04:56.455 "bdev_iscsi_delete", 00:04:56.455 "bdev_iscsi_create", 00:04:56.455 "bdev_iscsi_set_options", 00:04:56.455 "accel_error_inject_error", 00:04:56.455 "ioat_scan_accel_module", 00:04:56.455 "dsa_scan_accel_module", 00:04:56.455 "iaa_scan_accel_module", 00:04:56.455 "iscsi_set_options", 00:04:56.455 "iscsi_get_auth_groups", 00:04:56.455 "iscsi_auth_group_remove_secret", 00:04:56.455 "iscsi_auth_group_add_secret", 00:04:56.455 "iscsi_delete_auth_group", 00:04:56.455 "iscsi_create_auth_group", 00:04:56.455 "iscsi_set_discovery_auth", 00:04:56.455 "iscsi_get_options", 00:04:56.455 "iscsi_target_node_request_logout", 00:04:56.455 "iscsi_target_node_set_redirect", 00:04:56.455 "iscsi_target_node_set_auth", 00:04:56.455 "iscsi_target_node_add_lun", 00:04:56.455 "iscsi_get_connections", 00:04:56.455 "iscsi_portal_group_set_auth", 00:04:56.455 "iscsi_start_portal_group", 00:04:56.455 "iscsi_delete_portal_group", 00:04:56.455 "iscsi_create_portal_group", 00:04:56.455 "iscsi_get_portal_groups", 00:04:56.455 "iscsi_delete_target_node", 00:04:56.455 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.455 "iscsi_target_node_add_pg_ig_maps", 00:04:56.455 "iscsi_create_target_node", 00:04:56.455 "iscsi_get_target_nodes", 00:04:56.455 "iscsi_delete_initiator_group", 00:04:56.455 "iscsi_initiator_group_remove_initiators", 00:04:56.455 "iscsi_initiator_group_add_initiators", 00:04:56.455 "iscsi_create_initiator_group", 00:04:56.455 "iscsi_get_initiator_groups", 00:04:56.455 "nvmf_set_crdt", 00:04:56.455 "nvmf_set_config", 00:04:56.455 "nvmf_set_max_subsystems", 00:04:56.455 "nvmf_subsystem_get_listeners", 00:04:56.455 "nvmf_subsystem_get_qpairs", 00:04:56.455 "nvmf_subsystem_get_controllers", 00:04:56.455 "nvmf_get_stats", 00:04:56.455 "nvmf_get_transports", 00:04:56.455 "nvmf_create_transport", 00:04:56.455 "nvmf_get_targets", 00:04:56.455 "nvmf_delete_target", 00:04:56.455 "nvmf_create_target", 00:04:56.455 "nvmf_subsystem_allow_any_host", 00:04:56.455 "nvmf_subsystem_remove_host", 00:04:56.455 "nvmf_subsystem_add_host", 00:04:56.455 "nvmf_subsystem_remove_ns", 00:04:56.455 "nvmf_subsystem_add_ns", 00:04:56.455 "nvmf_subsystem_listener_set_ana_state", 00:04:56.455 "nvmf_discovery_get_referrals", 00:04:56.455 "nvmf_discovery_remove_referral", 00:04:56.455 "nvmf_discovery_add_referral", 00:04:56.455 "nvmf_subsystem_remove_listener", 00:04:56.455 
"nvmf_subsystem_add_listener", 00:04:56.455 "nvmf_delete_subsystem", 00:04:56.455 "nvmf_create_subsystem", 00:04:56.455 "nvmf_get_subsystems", 00:04:56.455 "env_dpdk_get_mem_stats", 00:04:56.455 "nbd_get_disks", 00:04:56.455 "nbd_stop_disk", 00:04:56.455 "nbd_start_disk", 00:04:56.455 "ublk_recover_disk", 00:04:56.455 "ublk_get_disks", 00:04:56.455 "ublk_stop_disk", 00:04:56.455 "ublk_start_disk", 00:04:56.455 "ublk_destroy_target", 00:04:56.455 "ublk_create_target", 00:04:56.455 "virtio_blk_create_transport", 00:04:56.455 "virtio_blk_get_transports", 00:04:56.455 "vhost_controller_set_coalescing", 00:04:56.455 "vhost_get_controllers", 00:04:56.455 "vhost_delete_controller", 00:04:56.455 "vhost_create_blk_controller", 00:04:56.455 "vhost_scsi_controller_remove_target", 00:04:56.455 "vhost_scsi_controller_add_target", 00:04:56.455 "vhost_start_scsi_controller", 00:04:56.455 "vhost_create_scsi_controller", 00:04:56.455 "thread_set_cpumask", 00:04:56.455 "framework_get_scheduler", 00:04:56.455 "framework_set_scheduler", 00:04:56.455 "framework_get_reactors", 00:04:56.455 "thread_get_io_channels", 00:04:56.455 "thread_get_pollers", 00:04:56.455 "thread_get_stats", 00:04:56.455 "framework_monitor_context_switch", 00:04:56.455 "spdk_kill_instance", 00:04:56.455 "log_enable_timestamps", 00:04:56.455 "log_get_flags", 00:04:56.455 "log_clear_flag", 00:04:56.455 "log_set_flag", 00:04:56.455 "log_get_level", 00:04:56.455 "log_set_level", 00:04:56.455 "log_get_print_level", 00:04:56.455 "log_set_print_level", 00:04:56.455 "framework_enable_cpumask_locks", 00:04:56.455 "framework_disable_cpumask_locks", 00:04:56.455 "framework_wait_init", 00:04:56.455 "framework_start_init", 00:04:56.455 "scsi_get_devices", 00:04:56.455 "bdev_get_histogram", 00:04:56.455 "bdev_enable_histogram", 00:04:56.455 "bdev_set_qos_limit", 00:04:56.455 "bdev_set_qd_sampling_period", 00:04:56.455 "bdev_get_bdevs", 00:04:56.455 "bdev_reset_iostat", 00:04:56.455 "bdev_get_iostat", 00:04:56.455 "bdev_examine", 00:04:56.455 "bdev_wait_for_examine", 00:04:56.455 "bdev_set_options", 00:04:56.455 "notify_get_notifications", 00:04:56.455 "notify_get_types", 00:04:56.455 "accel_get_stats", 00:04:56.455 "accel_set_options", 00:04:56.455 "accel_set_driver", 00:04:56.455 "accel_crypto_key_destroy", 00:04:56.455 "accel_crypto_keys_get", 00:04:56.455 "accel_crypto_key_create", 00:04:56.455 "accel_assign_opc", 00:04:56.455 "accel_get_module_info", 00:04:56.455 "accel_get_opc_assignments", 00:04:56.455 "vmd_rescan", 00:04:56.455 "vmd_remove_device", 00:04:56.455 "vmd_enable", 00:04:56.455 "sock_set_default_impl", 00:04:56.455 "sock_impl_set_options", 00:04:56.455 "sock_impl_get_options", 00:04:56.455 "iobuf_get_stats", 00:04:56.455 "iobuf_set_options", 00:04:56.455 "framework_get_pci_devices", 00:04:56.455 "framework_get_config", 00:04:56.455 "framework_get_subsystems", 00:04:56.455 "trace_get_info", 00:04:56.455 "trace_get_tpoint_group_mask", 00:04:56.455 "trace_disable_tpoint_group", 00:04:56.455 "trace_enable_tpoint_group", 00:04:56.455 "trace_clear_tpoint_mask", 00:04:56.455 "trace_set_tpoint_mask", 00:04:56.455 "spdk_get_version", 00:04:56.455 "rpc_get_methods" 00:04:56.455 ] 00:04:56.455 13:59:59 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.455 13:59:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.455 13:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:56.455 13:59:59 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.455 13:59:59 -- spdkcli/tcp.sh@38 -- # killprocess 56621 00:04:56.455 
13:59:59 -- common/autotest_common.sh@936 -- # '[' -z 56621 ']' 00:04:56.455 13:59:59 -- common/autotest_common.sh@940 -- # kill -0 56621 00:04:56.455 13:59:59 -- common/autotest_common.sh@941 -- # uname 00:04:56.455 13:59:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.455 13:59:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56621 00:04:56.455 killing process with pid 56621 00:04:56.455 13:59:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.455 13:59:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.455 13:59:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56621' 00:04:56.455 13:59:59 -- common/autotest_common.sh@955 -- # kill 56621 00:04:56.455 13:59:59 -- common/autotest_common.sh@960 -- # wait 56621 00:04:57.830 ************************************ 00:04:57.830 END TEST spdkcli_tcp 00:04:57.830 ************************************ 00:04:57.830 00:04:57.830 real 0m3.309s 00:04:57.830 user 0m6.118s 00:04:57.830 sys 0m0.492s 00:04:57.830 14:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.830 14:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:57.830 14:00:00 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.830 14:00:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.830 14:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.830 14:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:57.831 ************************************ 00:04:57.831 START TEST dpdk_mem_utility 00:04:57.831 ************************************ 00:04:57.831 14:00:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.831 * Looking for test storage... 00:04:57.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:57.831 14:00:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.831 14:00:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.831 14:00:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.831 14:00:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.831 14:00:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.831 14:00:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.831 14:00:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.831 14:00:00 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.831 14:00:00 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.831 14:00:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.831 14:00:00 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.831 14:00:00 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.831 14:00:00 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.831 14:00:00 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.831 14:00:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.831 14:00:00 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.831 14:00:00 -- scripts/common.sh@344 -- # : 1 00:04:57.831 14:00:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.831 14:00:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.831 14:00:00 -- scripts/common.sh@364 -- # decimal 1 00:04:57.831 14:00:00 -- scripts/common.sh@352 -- # local d=1 00:04:57.831 14:00:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.831 14:00:00 -- scripts/common.sh@354 -- # echo 1 00:04:57.831 14:00:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.831 14:00:00 -- scripts/common.sh@365 -- # decimal 2 00:04:57.831 14:00:00 -- scripts/common.sh@352 -- # local d=2 00:04:57.831 14:00:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.831 14:00:00 -- scripts/common.sh@354 -- # echo 2 00:04:57.831 14:00:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.831 14:00:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.831 14:00:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.831 14:00:00 -- scripts/common.sh@367 -- # return 0 00:04:57.831 14:00:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.831 14:00:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.831 --rc genhtml_branch_coverage=1 00:04:57.831 --rc genhtml_function_coverage=1 00:04:57.831 --rc genhtml_legend=1 00:04:57.831 --rc geninfo_all_blocks=1 00:04:57.831 --rc geninfo_unexecuted_blocks=1 00:04:57.831 00:04:57.831 ' 00:04:57.831 14:00:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.831 --rc genhtml_branch_coverage=1 00:04:57.831 --rc genhtml_function_coverage=1 00:04:57.831 --rc genhtml_legend=1 00:04:57.831 --rc geninfo_all_blocks=1 00:04:57.831 --rc geninfo_unexecuted_blocks=1 00:04:57.831 00:04:57.831 ' 00:04:57.831 14:00:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.831 --rc genhtml_branch_coverage=1 00:04:57.831 --rc genhtml_function_coverage=1 00:04:57.831 --rc genhtml_legend=1 00:04:57.831 --rc geninfo_all_blocks=1 00:04:57.831 --rc geninfo_unexecuted_blocks=1 00:04:57.831 00:04:57.831 ' 00:04:57.831 14:00:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.831 --rc genhtml_branch_coverage=1 00:04:57.831 --rc genhtml_function_coverage=1 00:04:57.831 --rc genhtml_legend=1 00:04:57.831 --rc geninfo_all_blocks=1 00:04:57.831 --rc geninfo_unexecuted_blocks=1 00:04:57.831 00:04:57.831 ' 00:04:57.831 14:00:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.831 14:00:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56733 00:04:57.831 14:00:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56733 00:04:57.831 14:00:00 -- common/autotest_common.sh@829 -- # '[' -z 56733 ']' 00:04:57.831 14:00:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.831 14:00:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.831 14:00:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.831 14:00:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
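The waitforlisten step traced above (with max_retries=100) retries a cheap RPC against the fresh target's UNIX socket until it answers before the test proceeds. A minimal sketch of that wait; the probe command and poll interval are assumptions, and the real helper in autotest_common.sh is more elaborate:

    # Wait for spdk_tgt to answer on its RPC socket (sketch, not verbatim).
    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do         # max_retries=100, as traced above
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
               rpc_get_methods &>/dev/null; then
            break                             # target is up and serving RPCs
        fi
        sleep 0.1                             # assumed poll interval
    done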
00:04:57.831 14:00:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.831 14:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:57.831 [2024-12-08 14:00:00.701768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.831 [2024-12-08 14:00:00.702029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56733 ] 00:04:58.091 [2024-12-08 14:00:00.851906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.351 [2024-12-08 14:00:01.028668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.351 [2024-12-08 14:00:01.029013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.292 14:00:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.292 14:00:02 -- common/autotest_common.sh@862 -- # return 0 00:04:59.292 14:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.292 14:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.292 14:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.292 14:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:59.292 { 00:04:59.292 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.292 } 00:04:59.292 14:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.292 14:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.292 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:59.292 1 heaps totaling size 820.000000 MiB 00:04:59.292 size: 820.000000 MiB heap id: 0 00:04:59.292 end heaps---------- 00:04:59.292 8 mempools totaling size 598.116089 MiB 00:04:59.292 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.292 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.292 size: 84.521057 MiB name: bdev_io_56733 00:04:59.292 size: 51.011292 MiB name: evtpool_56733 00:04:59.292 size: 50.003479 MiB name: msgpool_56733 00:04:59.292 size: 21.763794 MiB name: PDU_Pool 00:04:59.292 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.292 size: 0.026123 MiB name: Session_Pool 00:04:59.292 end mempools------- 00:04:59.292 6 memzones totaling size 4.142822 MiB 00:04:59.292 size: 1.000366 MiB name: RG_ring_0_56733 00:04:59.292 size: 1.000366 MiB name: RG_ring_1_56733 00:04:59.292 size: 1.000366 MiB name: RG_ring_4_56733 00:04:59.292 size: 1.000366 MiB name: RG_ring_5_56733 00:04:59.292 size: 0.125366 MiB name: RG_ring_2_56733 00:04:59.292 size: 0.015991 MiB name: RG_ring_3_56733 00:04:59.292 end memzones------- 00:04:59.292 14:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.553 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:04:59.553 list of free elements. 
size: 18.451538 MiB 00:04:59.553 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:59.553 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:59.553 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:59.553 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:59.553 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:59.553 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:59.553 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:59.553 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:59.553 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:59.553 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:59.553 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:59.553 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:59.553 element at address: 0x20001b000000 with size: 0.564880 MiB 00:04:59.553 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:59.553 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:59.553 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:59.553 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:59.553 element at address: 0x200003a00000 with size: 0.352234 MiB 00:04:59.553 list of standard malloc elements. size: 199.284058 MiB 00:04:59.553 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:59.553 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:59.553 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:59.553 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:59.553 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:59.553 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:59.553 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:59.553 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:59.553 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:59.553 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:59.553 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:59.553 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:59.553 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:59.553 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:59.554 element at 
address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b092cc0 with size: 0.000244 MiB 
00:04:59.554 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:59.554 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:59.555 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:59.555 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:59.555 element at 
address: 0x20002846b680 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e780 
with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:59.555 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:59.555 list of memzone associated elements. size: 602.264404 MiB 00:04:59.555 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:59.555 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:59.555 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:59.555 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:59.555 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:59.555 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56733_0 00:04:59.555 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:59.555 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56733_0 00:04:59.555 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:59.555 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56733_0 00:04:59.555 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:59.555 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:59.555 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:59.555 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:59.555 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:59.555 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56733 00:04:59.555 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:59.555 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56733 00:04:59.555 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:59.555 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56733 00:04:59.555 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:59.555 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:59.555 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:59.555 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:59.555 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:59.555 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:59.555 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:59.555 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:59.555 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:59.555 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56733 00:04:59.555 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:59.555 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56733 00:04:59.555 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:59.556 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56733 00:04:59.556 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:59.556 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56733 00:04:59.556 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:59.556 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56733 00:04:59.556 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:59.556 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:59.556 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:59.556 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:59.556 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:59.556 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:59.556 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:59.556 associated memzone info: size: 0.125366 MiB name: RG_ring_2_56733 00:04:59.556 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:59.556 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:59.556 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:59.556 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:59.556 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:59.556 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56733 00:04:59.556 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:59.556 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:59.556 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:59.556 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56733 00:04:59.556 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:59.556 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56733 00:04:59.556 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:59.556 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:59.556 14:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:59.556 14:00:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56733 00:04:59.556 14:00:02 -- common/autotest_common.sh@936 -- # '[' -z 56733 ']' 00:04:59.556 14:00:02 -- common/autotest_common.sh@940 -- # kill -0 56733 00:04:59.556 14:00:02 -- common/autotest_common.sh@941 -- # uname 00:04:59.556 14:00:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.556 14:00:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56733 00:04:59.556 14:00:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.556 14:00:02 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.556 14:00:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56733' 00:04:59.556 killing process with pid 56733 00:04:59.556 14:00:02 -- common/autotest_common.sh@955 -- # kill 56733 00:04:59.556 14:00:02 -- common/autotest_common.sh@960 -- # wait 56733 00:05:00.957 00:05:00.957 real 0m3.009s 00:05:00.957 user 0m3.126s 00:05:00.957 sys 0m0.400s 00:05:00.957 14:00:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.957 ************************************ 00:05:00.957 END TEST dpdk_mem_utility 00:05:00.957 ************************************ 00:05:00.957 14:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.957 14:00:03 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.957 14:00:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.957 14:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.957 ************************************ 00:05:00.957 START TEST event 00:05:00.957 ************************************ 00:05:00.957 14:00:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.957 * Looking for test storage... 00:05:00.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.957 14:00:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.957 14:00:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.957 14:00:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.957 14:00:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.957 14:00:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.957 14:00:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.957 14:00:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.957 14:00:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.957 14:00:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.957 14:00:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.957 14:00:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.957 14:00:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.957 14:00:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.957 14:00:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.957 14:00:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.957 14:00:03 -- scripts/common.sh@344 -- # : 1 00:05:00.957 14:00:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.957 14:00:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.957 14:00:03 -- scripts/common.sh@364 -- # decimal 1 00:05:00.957 14:00:03 -- scripts/common.sh@352 -- # local d=1 00:05:00.957 14:00:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.957 14:00:03 -- scripts/common.sh@354 -- # echo 1 00:05:00.957 14:00:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.957 14:00:03 -- scripts/common.sh@365 -- # decimal 2 00:05:00.957 14:00:03 -- scripts/common.sh@352 -- # local d=2 00:05:00.957 14:00:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.957 14:00:03 -- scripts/common.sh@354 -- # echo 2 00:05:00.957 14:00:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.957 14:00:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.957 14:00:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.957 14:00:03 -- scripts/common.sh@367 -- # return 0 00:05:00.957 14:00:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.957 --rc genhtml_branch_coverage=1 00:05:00.957 --rc genhtml_function_coverage=1 00:05:00.957 --rc genhtml_legend=1 00:05:00.957 --rc geninfo_all_blocks=1 00:05:00.957 --rc geninfo_unexecuted_blocks=1 00:05:00.957 00:05:00.957 ' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.957 --rc genhtml_branch_coverage=1 00:05:00.957 --rc genhtml_function_coverage=1 00:05:00.957 --rc genhtml_legend=1 00:05:00.957 --rc geninfo_all_blocks=1 00:05:00.957 --rc geninfo_unexecuted_blocks=1 00:05:00.957 00:05:00.957 ' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.957 --rc genhtml_branch_coverage=1 00:05:00.957 --rc genhtml_function_coverage=1 00:05:00.957 --rc genhtml_legend=1 00:05:00.957 --rc geninfo_all_blocks=1 00:05:00.957 --rc geninfo_unexecuted_blocks=1 00:05:00.957 00:05:00.957 ' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.957 --rc genhtml_branch_coverage=1 00:05:00.957 --rc genhtml_function_coverage=1 00:05:00.957 --rc genhtml_legend=1 00:05:00.957 --rc geninfo_all_blocks=1 00:05:00.957 --rc geninfo_unexecuted_blocks=1 00:05:00.957 00:05:00.957 ' 00:05:00.957 14:00:03 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:00.957 14:00:03 -- bdev/nbd_common.sh@6 -- # set -e 00:05:00.957 14:00:03 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.957 14:00:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:00.957 14:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.957 14:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:00.957 ************************************ 00:05:00.957 START TEST event_perf 00:05:00.957 ************************************ 00:05:00.957 14:00:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.957 Running I/O for 1 seconds...[2024-12-08 14:00:03.747971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
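The dpdk_mem_utility test that finished above is a fixed three-step flow: start spdk_tgt, ask it over RPC to dump its DPDK allocator state, then post-process the dump file. Replayed by hand it reduces to roughly the following — paths and socket as they appear in this run; rpc_cmd in the trace is a thin wrapper around rpc.py:

# Sketch of the dpdk_mem_utility flow traced above; assumes a built SPDK tree.
SPDK=/home/vagrant/spdk_repo/spdk

$SPDK/build/bin/spdk_tgt &
spdkpid=$!
trap 'kill $spdkpid' EXIT
# (the real test blocks in waitforlisten until /var/tmp/spdk.sock is up before this point)

# Returns the dump file name, {"filename": "/tmp/spdk_mem_dump.txt"} as shown above.
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats

$SPDK/scripts/dpdk_mem_info.py        # summary: heaps, mempools, memzones
$SPDK/scripts/dpdk_mem_info.py -m 0   # per-element detail for heap id 0, as dumped above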
00:05:00.957 [2024-12-08 14:00:03.748189] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56831 ] 00:05:01.217 [2024-12-08 14:00:03.894134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.217 [2024-12-08 14:00:04.119675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.217 [2024-12-08 14:00:04.120049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.217 [2024-12-08 14:00:04.120256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.217 [2024-12-08 14:00:04.120367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.593 Running I/O for 1 seconds... 00:05:02.593 lcore 0: 202481 00:05:02.593 lcore 1: 202481 00:05:02.593 lcore 2: 202484 00:05:02.593 lcore 3: 202485 00:05:02.593 done. 00:05:02.593 00:05:02.593 real 0m1.630s 00:05:02.593 user 0m4.408s 00:05:02.593 sys 0m0.106s 00:05:02.593 14:00:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.593 ************************************ 00:05:02.593 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.593 END TEST event_perf 00:05:02.593 ************************************ 00:05:02.594 14:00:05 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.594 14:00:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:02.594 14:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.594 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:05:02.594 ************************************ 00:05:02.594 START TEST event_reactor 00:05:02.594 ************************************ 00:05:02.594 14:00:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.594 [2024-12-08 14:00:05.429613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
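event_perf above reports one counter per reactor: with -m 0xF it brings up reactors on lcores 0 through 3, and each prints how many events it handled inside the -t 1 second window (~202k apiece in this run). The reactor and reactor_perf tests that follow take the same -t flag. Run standalone, the three invocations captured in this log are:

# Invocations as captured in this run; -m is the reactor core mask, -t the runtime in seconds.
T=/home/vagrant/spdk_repo/spdk/test/event

$T/event_perf/event_perf -m 0xF -t 1   # event round-trip count, one "lcore N:" line per reactor
$T/reactor/reactor -t 1                # oneshot/tick trace from timed pollers, as shown below
$T/reactor_perf/reactor_perf -t 1      # aggregate "Performance: N events per second"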
00:05:02.594 [2024-12-08 14:00:05.429904] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56876 ] 00:05:02.852 [2024-12-08 14:00:05.580234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.852 [2024-12-08 14:00:05.729668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.228 test_start 00:05:04.228 oneshot 00:05:04.228 tick 100 00:05:04.228 tick 100 00:05:04.228 tick 250 00:05:04.228 tick 100 00:05:04.228 tick 100 00:05:04.228 tick 250 00:05:04.228 tick 100 00:05:04.228 tick 500 00:05:04.228 tick 100 00:05:04.228 tick 100 00:05:04.228 tick 250 00:05:04.228 tick 100 00:05:04.228 tick 100 00:05:04.228 test_end 00:05:04.228 00:05:04.228 real 0m1.537s 00:05:04.228 user 0m1.347s 00:05:04.228 sys 0m0.081s 00:05:04.228 14:00:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.228 ************************************ 00:05:04.228 END TEST event_reactor 00:05:04.228 ************************************ 00:05:04.228 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.228 14:00:06 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.228 14:00:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:04.228 14:00:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.228 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:04.228 ************************************ 00:05:04.228 START TEST event_reactor_perf 00:05:04.228 ************************************ 00:05:04.228 14:00:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.228 [2024-12-08 14:00:07.009833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:04.228 [2024-12-08 14:00:07.010123] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56907 ] 00:05:04.486 [2024-12-08 14:00:07.159577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.486 [2024-12-08 14:00:07.298291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.869 test_start 00:05:05.869 test_end 00:05:05.870 Performance: 336019 events per second 00:05:05.870 ************************************ 00:05:05.870 END TEST event_reactor_perf 00:05:05.870 ************************************ 00:05:05.870 00:05:05.870 real 0m1.541s 00:05:05.870 user 0m1.352s 00:05:05.870 sys 0m0.078s 00:05:05.870 14:00:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.870 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 14:00:08 -- event/event.sh@49 -- # uname -s 00:05:05.870 14:00:08 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.870 14:00:08 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.870 14:00:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.870 14:00:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.870 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 ************************************ 00:05:05.870 START TEST event_scheduler 00:05:05.870 ************************************ 00:05:05.870 14:00:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.870 * Looking for test storage... 00:05:05.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.870 14:00:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.870 14:00:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.870 14:00:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.870 14:00:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.870 14:00:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.870 14:00:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.870 14:00:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.870 14:00:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.870 14:00:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.870 14:00:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.870 14:00:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.870 14:00:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.870 14:00:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.870 14:00:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.870 14:00:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.870 14:00:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.870 14:00:08 -- scripts/common.sh@344 -- # : 1 00:05:05.870 14:00:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.870 14:00:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.870 14:00:08 -- scripts/common.sh@364 -- # decimal 1 00:05:05.870 14:00:08 -- scripts/common.sh@352 -- # local d=1 00:05:05.870 14:00:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.870 14:00:08 -- scripts/common.sh@354 -- # echo 1 00:05:05.870 14:00:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.870 14:00:08 -- scripts/common.sh@365 -- # decimal 2 00:05:05.870 14:00:08 -- scripts/common.sh@352 -- # local d=2 00:05:05.870 14:00:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.870 14:00:08 -- scripts/common.sh@354 -- # echo 2 00:05:05.870 14:00:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.870 14:00:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.870 14:00:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.870 14:00:08 -- scripts/common.sh@367 -- # return 0 00:05:05.870 14:00:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.870 14:00:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.870 --rc genhtml_branch_coverage=1 00:05:05.870 --rc genhtml_function_coverage=1 00:05:05.870 --rc genhtml_legend=1 00:05:05.870 --rc geninfo_all_blocks=1 00:05:05.870 --rc geninfo_unexecuted_blocks=1 00:05:05.870 00:05:05.870 ' 00:05:05.870 14:00:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.870 --rc genhtml_branch_coverage=1 00:05:05.870 --rc genhtml_function_coverage=1 00:05:05.870 --rc genhtml_legend=1 00:05:05.870 --rc geninfo_all_blocks=1 00:05:05.870 --rc geninfo_unexecuted_blocks=1 00:05:05.870 00:05:05.870 ' 00:05:05.870 14:00:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.870 --rc genhtml_branch_coverage=1 00:05:05.870 --rc genhtml_function_coverage=1 00:05:05.870 --rc genhtml_legend=1 00:05:05.870 --rc geninfo_all_blocks=1 00:05:05.870 --rc geninfo_unexecuted_blocks=1 00:05:05.870 00:05:05.870 ' 00:05:05.870 14:00:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.870 --rc genhtml_branch_coverage=1 00:05:05.870 --rc genhtml_function_coverage=1 00:05:05.870 --rc genhtml_legend=1 00:05:05.870 --rc geninfo_all_blocks=1 00:05:05.870 --rc geninfo_unexecuted_blocks=1 00:05:05.870 00:05:05.870 ' 00:05:05.870 14:00:08 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.870 14:00:08 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56982 00:05:05.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.870 14:00:08 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.870 14:00:08 -- scheduler/scheduler.sh@37 -- # waitforlisten 56982 00:05:05.870 14:00:08 -- common/autotest_common.sh@829 -- # '[' -z 56982 ']' 00:05:05.870 14:00:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.870 14:00:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.870 14:00:08 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.870 14:00:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
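The scheduler app is launched just above with -m 0xF -p 0x2 --wait-for-rpc -f, after which waitforlisten blocks until the target's UNIX-domain RPC socket answers, and the accompanying trap guarantees killprocess runs on any exit. Stripped of the framework plumbing, the launch/wait/cleanup lifecycle used throughout these tests looks approximately like this (a simplification: the real waitforlisten and killprocess do additional retry and process-name checks):

# Hedged sketch of the launch/wait/cleanup pattern from this log.
app=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
sock=/var/tmp/spdk.sock

"$app" -m 0xF -p 0x2 --wait-for-rpc -f &
pid=$!
trap 'kill "$pid" 2>/dev/null; wait "$pid"' SIGINT SIGTERM EXIT

# waitforlisten, minimally: poll until the RPC socket exists, bailing if the app dies.
until [ -S "$sock" ]; do
    kill -0 "$pid" || { echo "app exited before listening" >&2; exit 1; }
    sleep 0.1
done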
00:05:05.870 14:00:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.870 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.870 [2024-12-08 14:00:08.765039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:05.870 [2024-12-08 14:00:08.765316] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56982 ] 00:05:06.132 [2024-12-08 14:00:08.911761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.392 [2024-12-08 14:00:09.140118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.392 [2024-12-08 14:00:09.140263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.392 [2024-12-08 14:00:09.140621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.392 [2024-12-08 14:00:09.140627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.959 14:00:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.959 14:00:09 -- common/autotest_common.sh@862 -- # return 0 00:05:06.959 14:00:09 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.959 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.959 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.959 POWER: Env isn't set yet! 00:05:06.959 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:06.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.959 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.959 POWER: Attempting to initialise PSTAT power management... 00:05:06.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.959 POWER: Cannot set governor of lcore 0 to performance 00:05:06.959 POWER: Attempting to initialise AMD PSTATE power management... 00:05:06.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.959 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.959 POWER: Attempting to initialise CPPC power management... 00:05:06.959 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.959 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.959 POWER: Attempting to initialise VM power management... 
00:05:06.959 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.959 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.959 [2024-12-08 14:00:09.590393] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.959 [2024-12-08 14:00:09.590407] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:06.959 [2024-12-08 14:00:09.590417] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.959 [2024-12-08 14:00:09.590431] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.959 [2024-12-08 14:00:09.590439] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.959 [2024-12-08 14:00:09.590446] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.959 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.959 14:00:09 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.959 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.959 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.959 [2024-12-08 14:00:09.830907] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:06.959 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.959 14:00:09 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:06.959 14:00:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.959 14:00:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.959 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.959 ************************************ 00:05:06.959 START TEST scheduler_create_thread 00:05:06.959 ************************************ 00:05:06.959 14:00:09 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:06.959 14:00:09 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:06.959 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.959 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.959 2 00:05:06.959 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.959 14:00:09 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:06.959 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.959 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.959 3 00:05:06.959 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.959 14:00:09 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:06.959 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.959 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 4 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 5 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 6 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 7 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 8 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 9 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 10 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.218 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.218 14:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.218 14:00:09 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.218 14:00:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.219 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.241 14:00:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.241 ************************************ 00:05:08.241 END TEST scheduler_create_thread 00:05:08.241 ************************************ 00:05:08.241 00:05:08.241 real 0m1.176s 00:05:08.241 user 0m0.010s 00:05:08.241 sys 0m0.007s 00:05:08.241 14:00:11 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.241 14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:08.241 14:00:11 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.241 14:00:11 -- scheduler/scheduler.sh@46 -- # killprocess 56982 00:05:08.241 14:00:11 -- common/autotest_common.sh@936 -- # '[' -z 56982 ']' 00:05:08.241 14:00:11 -- common/autotest_common.sh@940 -- # kill -0 56982 00:05:08.241 14:00:11 -- common/autotest_common.sh@941 -- # uname 00:05:08.241 14:00:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.241 14:00:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56982 00:05:08.241 killing process with pid 56982 00:05:08.241 14:00:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:08.241 14:00:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:08.241 14:00:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56982' 00:05:08.241 14:00:11 -- common/autotest_common.sh@955 -- # kill 56982 00:05:08.241 14:00:11 -- common/autotest_common.sh@960 -- # wait 56982 00:05:08.822 [2024-12-08 14:00:11.507753] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:09.389 00:05:09.389 real 0m3.557s 00:05:09.389 user 0m5.305s 00:05:09.389 sys 0m0.372s 00:05:09.389 14:00:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.389 ************************************ 00:05:09.389 END TEST event_scheduler 00:05:09.389 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.389 ************************************ 00:05:09.389 14:00:12 -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.389 14:00:12 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.389 14:00:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.389 14:00:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.389 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.389 ************************************ 00:05:09.389 START TEST app_repeat 00:05:09.389 ************************************ 00:05:09.389 14:00:12 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:09.389 14:00:12 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.389 14:00:12 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.389 14:00:12 -- event/event.sh@13 -- # local nbd_list 00:05:09.389 14:00:12 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.389 14:00:12 -- event/event.sh@14 -- # local bdev_list 00:05:09.389 14:00:12 -- event/event.sh@15 -- # local repeat_times=4 00:05:09.389 14:00:12 -- event/event.sh@17 -- # modprobe nbd 00:05:09.389 Process app_repeat pid: 57076 00:05:09.389 spdk_app_start Round 0 00:05:09.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
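In the scheduler test that just ended, rpc resolves to rpc_cmd (scheduler.sh@29) and hence to rpc.py, and the scheduler_thread_* methods come from the test app's own plugin, loaded with --plugin scheduler_plugin — they are not part of the stock rpc.py surface. The traced sequence, issued directly (assuming, as the test arranges, that the plugin module is importable, e.g. via PYTHONPATH):

# Replays the RPC sequence from the scheduler test above; thread ids 11/12 match this run.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

$RPC framework_set_scheduler dynamic        # only possible pre-init, hence --wait-for-rpc
$RPC framework_start_init
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to lcore 0, 100% active
$RPC scheduler_thread_create -n half_active -a 0             # returns a thread id (11 here)
$RPC scheduler_thread_set_active 11 50                       # raise its active load to 50%
$RPC scheduler_thread_create -n deleted -a 100               # id 12 here
$RPC scheduler_thread_delete 12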
00:05:09.389 14:00:12 -- event/event.sh@19 -- # repeat_pid=57076 00:05:09.389 14:00:12 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.389 14:00:12 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57076' 00:05:09.389 14:00:12 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.389 14:00:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.390 14:00:12 -- event/event.sh@25 -- # waitforlisten 57076 /var/tmp/spdk-nbd.sock 00:05:09.390 14:00:12 -- common/autotest_common.sh@829 -- # '[' -z 57076 ']' 00:05:09.390 14:00:12 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.390 14:00:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.390 14:00:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.390 14:00:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.390 14:00:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.390 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 [2024-12-08 14:00:12.215390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:09.390 [2024-12-08 14:00:12.215493] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57076 ] 00:05:09.651 [2024-12-08 14:00:12.366472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.911 [2024-12-08 14:00:12.593728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.911 [2024-12-08 14:00:12.593855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.172 14:00:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.172 14:00:13 -- common/autotest_common.sh@862 -- # return 0 00:05:10.172 14:00:13 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.434 Malloc0 00:05:10.434 14:00:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.695 Malloc1 00:05:10.695 14:00:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.695 14:00:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.695 14:00:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.695 14:00:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.695 14:00:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.695 14:00:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.695 14:00:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@12 -- # local i 00:05:10.696 14:00:13 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.696 14:00:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.956 /dev/nbd0 00:05:10.956 14:00:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.956 14:00:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.956 14:00:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:10.956 14:00:13 -- common/autotest_common.sh@867 -- # local i 00:05:10.956 14:00:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.956 14:00:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.956 14:00:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:10.956 14:00:13 -- common/autotest_common.sh@871 -- # break 00:05:10.956 14:00:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.956 14:00:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.956 14:00:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.956 1+0 records in 00:05:10.956 1+0 records out 00:05:10.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482398 s, 8.5 MB/s 00:05:10.956 14:00:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.956 14:00:13 -- common/autotest_common.sh@884 -- # size=4096 00:05:10.956 14:00:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.956 14:00:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.956 14:00:13 -- common/autotest_common.sh@887 -- # return 0 00:05:10.956 14:00:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.956 14:00:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.956 14:00:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.218 /dev/nbd1 00:05:11.218 14:00:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.218 14:00:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.218 14:00:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:11.218 14:00:14 -- common/autotest_common.sh@867 -- # local i 00:05:11.218 14:00:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.218 14:00:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.218 14:00:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:11.218 14:00:14 -- common/autotest_common.sh@871 -- # break 00:05:11.218 14:00:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.218 14:00:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.218 14:00:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.218 1+0 records in 00:05:11.218 1+0 records out 00:05:11.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057635 s, 7.1 MB/s 00:05:11.218 14:00:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.218 14:00:14 -- common/autotest_common.sh@884 -- # size=4096 00:05:11.218 14:00:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.218 14:00:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.218 14:00:14 -- common/autotest_common.sh@887 -- # return 0 00:05:11.218 
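[annotation] Each nbd_start_disk above gates on waitfornbd before the test proceeds. A sketch reconstructed from the traced autotest_common.sh@866-@887 lines; the inter-poll sleep is an assumption (the trace only shows the loop bounds and the break), and the long traced nbdtest path is shortened to /tmp here:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: brief delay between /proc/partitions polls
        done
        # Prove the device is actually readable: pull one direct-I/O block off it
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a non-empty read means the NBD device is live
    }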
14:00:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.218 14:00:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.218 14:00:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.218 14:00:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.218 14:00:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.478 { 00:05:11.478 "nbd_device": "/dev/nbd0", 00:05:11.478 "bdev_name": "Malloc0" 00:05:11.478 }, 00:05:11.478 { 00:05:11.478 "nbd_device": "/dev/nbd1", 00:05:11.478 "bdev_name": "Malloc1" 00:05:11.478 } 00:05:11.478 ]' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.478 { 00:05:11.478 "nbd_device": "/dev/nbd0", 00:05:11.478 "bdev_name": "Malloc0" 00:05:11.478 }, 00:05:11.478 { 00:05:11.478 "nbd_device": "/dev/nbd1", 00:05:11.478 "bdev_name": "Malloc1" 00:05:11.478 } 00:05:11.478 ]' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.478 /dev/nbd1' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.478 /dev/nbd1' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.478 14:00:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.479 256+0 records in 00:05:11.479 256+0 records out 00:05:11.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759626 s, 138 MB/s 00:05:11.479 14:00:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.479 14:00:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.479 256+0 records in 00:05:11.479 256+0 records out 00:05:11.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252254 s, 41.6 MB/s 00:05:11.479 14:00:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.479 14:00:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.739 256+0 records in 00:05:11.739 256+0 records out 00:05:11.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0413511 s, 25.4 MB/s 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.739 14:00:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@51 -- # local i 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@41 -- # break 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.740 14:00:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@41 -- # break 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.002 14:00:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@65 -- # true 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.260 
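[annotation] The write/verify pair traced above (nbd_dd_data_verify with operation=write, then operation=verify) reduces to a dd fan-out plus a byte-wise cmp. A condensed sketch that merges the two passes into one function; the real helper takes the operation as an argument, and the traced nbdrandtest path is shortened:

    nbd_dd_data_verify() {
        local tmp_file=/tmp/nbdrandtest
        local nbd_list=("$@")
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256         # 1 MiB of random data
        for nbd in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        done
        for nbd in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$nbd"   # exits non-zero on the first differing byte
        done
        rm "$tmp_file"
    }

    # usage, matching the trace: nbd_dd_data_verify /dev/nbd0 /dev/nbd1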
14:00:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.260 14:00:15 -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.260 14:00:15 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.518 14:00:15 -- event/event.sh@35 -- # sleep 3 00:05:13.452 [2024-12-08 14:00:16.036854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.452 [2024-12-08 14:00:16.174202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.452 [2024-12-08 14:00:16.174408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.452 [2024-12-08 14:00:16.278277] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.452 [2024-12-08 14:00:16.278321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.978 spdk_app_start Round 1 00:05:15.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.978 14:00:18 -- event/event.sh@23 -- # for i in {0..2} 00:05:15.978 14:00:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.978 14:00:18 -- event/event.sh@25 -- # waitforlisten 57076 /var/tmp/spdk-nbd.sock 00:05:15.978 14:00:18 -- common/autotest_common.sh@829 -- # '[' -z 57076 ']' 00:05:15.978 14:00:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.978 14:00:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.978 14:00:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
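[annotation] Teardown mirrors startup: nbd_stop_disk issues the stop RPC, then waitfornbd_exit polls until the device drops out of the partition table, as in the @35-@45 trace lines above. A sketch, with the inter-poll sleep again assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the table
            sleep 0.1   # assumption
        done
        return 0   # the trace returns success either way; callers re-check via nbd_get_disks
    }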
00:05:15.978 14:00:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.978 14:00:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.978 14:00:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.978 14:00:18 -- common/autotest_common.sh@862 -- # return 0 00:05:15.978 14:00:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.978 Malloc0 00:05:15.978 14:00:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.236 Malloc1 00:05:16.236 14:00:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.236 14:00:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.237 14:00:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.237 14:00:19 -- bdev/nbd_common.sh@12 -- # local i 00:05:16.237 14:00:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.237 14:00:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.237 14:00:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.494 /dev/nbd0 00:05:16.494 14:00:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.494 14:00:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.494 14:00:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:16.494 14:00:19 -- common/autotest_common.sh@867 -- # local i 00:05:16.494 14:00:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.494 14:00:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.494 14:00:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:16.494 14:00:19 -- common/autotest_common.sh@871 -- # break 00:05:16.494 14:00:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.494 14:00:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.494 14:00:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.494 1+0 records in 00:05:16.494 1+0 records out 00:05:16.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014233 s, 28.8 MB/s 00:05:16.494 14:00:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.494 14:00:19 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.494 14:00:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.494 14:00:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.494 14:00:19 -- common/autotest_common.sh@887 -- # return 0 00:05:16.494 14:00:19 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.494 14:00:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.494 14:00:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.755 /dev/nbd1 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.755 14:00:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:16.755 14:00:19 -- common/autotest_common.sh@867 -- # local i 00:05:16.755 14:00:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:16.755 14:00:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:16.755 14:00:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:16.755 14:00:19 -- common/autotest_common.sh@871 -- # break 00:05:16.755 14:00:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:16.755 14:00:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:16.755 14:00:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.755 1+0 records in 00:05:16.755 1+0 records out 00:05:16.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432838 s, 9.5 MB/s 00:05:16.755 14:00:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.755 14:00:19 -- common/autotest_common.sh@884 -- # size=4096 00:05:16.755 14:00:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.755 14:00:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:16.755 14:00:19 -- common/autotest_common.sh@887 -- # return 0 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.755 { 00:05:16.755 "nbd_device": "/dev/nbd0", 00:05:16.755 "bdev_name": "Malloc0" 00:05:16.755 }, 00:05:16.755 { 00:05:16.755 "nbd_device": "/dev/nbd1", 00:05:16.755 "bdev_name": "Malloc1" 00:05:16.755 } 00:05:16.755 ]' 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.755 { 00:05:16.755 "nbd_device": "/dev/nbd0", 00:05:16.755 "bdev_name": "Malloc0" 00:05:16.755 }, 00:05:16.755 { 00:05:16.755 "nbd_device": "/dev/nbd1", 00:05:16.755 "bdev_name": "Malloc1" 00:05:16.755 } 00:05:16.755 ]' 00:05:16.755 14:00:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.016 14:00:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.016 /dev/nbd1' 00:05:17.016 14:00:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.016 /dev/nbd1' 00:05:17.016 14:00:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.016 14:00:19 -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.016 14:00:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.017 256+0 records in 00:05:17.017 256+0 records out 00:05:17.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722427 s, 145 MB/s 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.017 256+0 records in 00:05:17.017 256+0 records out 00:05:17.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019837 s, 52.9 MB/s 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.017 256+0 records in 00:05:17.017 256+0 records out 00:05:17.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244389 s, 42.9 MB/s 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@51 -- # local i 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.017 14:00:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@41 -- # break 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.278 14:00:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@41 -- # break 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.278 14:00:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@65 -- # true 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.538 14:00:20 -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.538 14:00:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.800 14:00:20 -- event/event.sh@35 -- # sleep 3 00:05:18.740 [2024-12-08 14:00:21.638290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.001 [2024-12-08 14:00:21.826321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.001 [2024-12-08 14:00:21.826407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.264 [2024-12-08 14:00:21.942282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.264 [2024-12-08 14:00:21.942340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.173 spdk_app_start Round 2 00:05:21.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
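[annotation] Every round re-enters waitforlisten before touching the socket. The trace only shows its argument handling (@829-@838) and the successful exit (@858-@862), so the polling body below is an assumption about what sits in between, not the helper's literal code:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [ -S "$rpc_addr" ] && return 0           # assumption: socket-existence check
            sleep 0.1
        done
        return 1
    }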
00:05:21.173 14:00:23 -- event/event.sh@23 -- # for i in {0..2} 00:05:21.173 14:00:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.173 14:00:23 -- event/event.sh@25 -- # waitforlisten 57076 /var/tmp/spdk-nbd.sock 00:05:21.173 14:00:23 -- common/autotest_common.sh@829 -- # '[' -z 57076 ']' 00:05:21.173 14:00:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.173 14:00:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.173 14:00:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.173 14:00:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.173 14:00:23 -- common/autotest_common.sh@10 -- # set +x 00:05:21.173 14:00:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.173 14:00:23 -- common/autotest_common.sh@862 -- # return 0 00:05:21.173 14:00:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.431 Malloc0 00:05:21.431 14:00:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.431 Malloc1 00:05:21.431 14:00:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.431 14:00:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.690 /dev/nbd0 00:05:21.690 14:00:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.690 14:00:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.690 14:00:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:21.690 14:00:24 -- common/autotest_common.sh@867 -- # local i 00:05:21.690 14:00:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.690 14:00:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.690 14:00:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:21.690 14:00:24 -- common/autotest_common.sh@871 -- # break 00:05:21.690 14:00:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.690 14:00:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.690 14:00:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:21.690 1+0 records in 00:05:21.690 1+0 records out 00:05:21.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249978 s, 16.4 MB/s 00:05:21.690 14:00:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.690 14:00:24 -- common/autotest_common.sh@884 -- # size=4096 00:05:21.690 14:00:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.690 14:00:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.690 14:00:24 -- common/autotest_common.sh@887 -- # return 0 00:05:21.690 14:00:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.690 14:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.690 14:00:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.947 /dev/nbd1 00:05:21.947 14:00:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.947 14:00:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.947 14:00:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:21.947 14:00:24 -- common/autotest_common.sh@867 -- # local i 00:05:21.947 14:00:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.947 14:00:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.947 14:00:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:21.947 14:00:24 -- common/autotest_common.sh@871 -- # break 00:05:21.947 14:00:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.947 14:00:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.947 14:00:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.947 1+0 records in 00:05:21.947 1+0 records out 00:05:21.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017135 s, 23.9 MB/s 00:05:21.947 14:00:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.947 14:00:24 -- common/autotest_common.sh@884 -- # size=4096 00:05:21.947 14:00:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.947 14:00:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.947 14:00:24 -- common/autotest_common.sh@887 -- # return 0 00:05:21.947 14:00:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.947 14:00:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.947 14:00:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.947 14:00:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.948 14:00:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.208 14:00:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.208 { 00:05:22.208 "nbd_device": "/dev/nbd0", 00:05:22.208 "bdev_name": "Malloc0" 00:05:22.208 }, 00:05:22.208 { 00:05:22.208 "nbd_device": "/dev/nbd1", 00:05:22.208 "bdev_name": "Malloc1" 00:05:22.208 } 00:05:22.208 ]' 00:05:22.208 14:00:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.208 14:00:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.208 { 00:05:22.208 "nbd_device": "/dev/nbd0", 00:05:22.208 "bdev_name": "Malloc0" 00:05:22.208 }, 00:05:22.208 { 00:05:22.208 "nbd_device": "/dev/nbd1", 00:05:22.208 "bdev_name": "Malloc1" 00:05:22.208 } 00:05:22.208 ]' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:05:22.208 /dev/nbd1' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.208 /dev/nbd1' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.208 256+0 records in 00:05:22.208 256+0 records out 00:05:22.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070187 s, 149 MB/s 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.208 256+0 records in 00:05:22.208 256+0 records out 00:05:22.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199307 s, 52.6 MB/s 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.208 256+0 records in 00:05:22.208 256+0 records out 00:05:22.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223673 s, 46.9 MB/s 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.208 14:00:25 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.208 14:00:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@41 -- # break 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.466 14:00:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@41 -- # break 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.723 14:00:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@65 -- # true 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.980 14:00:25 -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.980 14:00:25 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.237 14:00:25 -- event/event.sh@35 -- # sleep 3 00:05:23.804 [2024-12-08 14:00:26.647256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.062 [2024-12-08 14:00:26.794144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.062 [2024-12-08 14:00:26.794174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.062 [2024-12-08 14:00:26.909526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.062 [2024-12-08 14:00:26.909592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
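[annotation] Stepping back, the whole app_repeat body is the event.sh round loop: three scripted restarts plus a final Round 3 that the application (-t 4, four rounds total) brings up on its own. Reassembled from the @23-@25, @34-@35, and @38-@39 trace lines, with the full rpc.py path shortened:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... malloc bdevs created, exported over NBD, written and verified ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # let the app tear down and restart its framework
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # Round 3 comes up unprompted
    killprocess "$repeat_pid"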
00:05:26.594 14:00:28 -- event/event.sh@38 -- # waitforlisten 57076 /var/tmp/spdk-nbd.sock 00:05:26.594 14:00:28 -- common/autotest_common.sh@829 -- # '[' -z 57076 ']' 00:05:26.594 14:00:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.594 14:00:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.594 14:00:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.594 14:00:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.594 14:00:28 -- common/autotest_common.sh@10 -- # set +x 00:05:26.594 14:00:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.594 14:00:29 -- common/autotest_common.sh@862 -- # return 0 00:05:26.594 14:00:29 -- event/event.sh@39 -- # killprocess 57076 00:05:26.594 14:00:29 -- common/autotest_common.sh@936 -- # '[' -z 57076 ']' 00:05:26.594 14:00:29 -- common/autotest_common.sh@940 -- # kill -0 57076 00:05:26.594 14:00:29 -- common/autotest_common.sh@941 -- # uname 00:05:26.594 14:00:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.594 14:00:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57076 00:05:26.594 14:00:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.594 14:00:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.594 killing process with pid 57076 00:05:26.594 14:00:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57076' 00:05:26.594 14:00:29 -- common/autotest_common.sh@955 -- # kill 57076 00:05:26.594 14:00:29 -- common/autotest_common.sh@960 -- # wait 57076 00:05:27.161 spdk_app_start is called in Round 0. 00:05:27.161 Shutdown signal received, stop current app iteration 00:05:27.161 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.161 spdk_app_start is called in Round 1. 00:05:27.161 Shutdown signal received, stop current app iteration 00:05:27.161 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.161 spdk_app_start is called in Round 2. 00:05:27.161 Shutdown signal received, stop current app iteration 00:05:27.161 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.161 spdk_app_start is called in Round 3. 
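[annotation] killprocess, exercised here against pid 57076 and earlier against 56982, is a guarded kill-and-reap. A sketch from the @936-@960 trace; the trace only shows the non-sudo path ('[' reactor_0 = sudo ']'), so the sudo branch is elided as an assumption:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # assumption: a sudo-wrapped target is handled by a branch not shown here
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap; a SIGTERM'd exit is non-zero
    }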
00:05:27.161 Shutdown signal received, stop current app iteration 00:05:27.161 14:00:29 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.161 14:00:29 -- event/event.sh@42 -- # return 0 00:05:27.161 00:05:27.161 real 0m17.787s 00:05:27.161 user 0m37.548s 00:05:27.161 sys 0m2.277s 00:05:27.161 14:00:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.161 14:00:29 -- common/autotest_common.sh@10 -- # set +x 00:05:27.161 ************************************ 00:05:27.161 END TEST app_repeat 00:05:27.161 ************************************ 00:05:27.161 14:00:29 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.161 14:00:29 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.161 14:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.161 14:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.161 14:00:29 -- common/autotest_common.sh@10 -- # set +x 00:05:27.161 ************************************ 00:05:27.161 START TEST cpu_locks 00:05:27.161 ************************************ 00:05:27.161 14:00:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.161 * Looking for test storage... 00:05:27.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.161 14:00:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.161 14:00:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.161 14:00:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.421 14:00:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.421 14:00:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.421 14:00:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.421 14:00:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.421 14:00:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.421 14:00:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.421 14:00:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.421 14:00:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.421 14:00:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.421 14:00:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.421 14:00:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.421 14:00:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.421 14:00:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.421 14:00:30 -- scripts/common.sh@344 -- # : 1 00:05:27.421 14:00:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.421 14:00:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.421 14:00:30 -- scripts/common.sh@364 -- # decimal 1 00:05:27.421 14:00:30 -- scripts/common.sh@352 -- # local d=1 00:05:27.421 14:00:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.421 14:00:30 -- scripts/common.sh@354 -- # echo 1 00:05:27.421 14:00:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.421 14:00:30 -- scripts/common.sh@365 -- # decimal 2 00:05:27.421 14:00:30 -- scripts/common.sh@352 -- # local d=2 00:05:27.421 14:00:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.421 14:00:30 -- scripts/common.sh@354 -- # echo 2 00:05:27.421 14:00:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.421 14:00:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.421 14:00:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.421 14:00:30 -- scripts/common.sh@367 -- # return 0 00:05:27.421 14:00:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.421 14:00:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.421 --rc genhtml_branch_coverage=1 00:05:27.421 --rc genhtml_function_coverage=1 00:05:27.421 --rc genhtml_legend=1 00:05:27.421 --rc geninfo_all_blocks=1 00:05:27.421 --rc geninfo_unexecuted_blocks=1 00:05:27.421 00:05:27.421 ' 00:05:27.421 14:00:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.421 --rc genhtml_branch_coverage=1 00:05:27.421 --rc genhtml_function_coverage=1 00:05:27.421 --rc genhtml_legend=1 00:05:27.421 --rc geninfo_all_blocks=1 00:05:27.421 --rc geninfo_unexecuted_blocks=1 00:05:27.421 00:05:27.421 ' 00:05:27.421 14:00:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.421 --rc genhtml_branch_coverage=1 00:05:27.421 --rc genhtml_function_coverage=1 00:05:27.421 --rc genhtml_legend=1 00:05:27.421 --rc geninfo_all_blocks=1 00:05:27.421 --rc geninfo_unexecuted_blocks=1 00:05:27.421 00:05:27.421 ' 00:05:27.421 14:00:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.421 --rc genhtml_branch_coverage=1 00:05:27.421 --rc genhtml_function_coverage=1 00:05:27.421 --rc genhtml_legend=1 00:05:27.421 --rc geninfo_all_blocks=1 00:05:27.421 --rc geninfo_unexecuted_blocks=1 00:05:27.421 00:05:27.421 ' 00:05:27.421 14:00:30 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.421 14:00:30 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.421 14:00:30 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.422 14:00:30 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.422 14:00:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.422 14:00:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.422 14:00:30 -- common/autotest_common.sh@10 -- # set +x 00:05:27.422 ************************************ 00:05:27.422 START TEST default_locks 00:05:27.422 ************************************ 00:05:27.422 14:00:30 -- common/autotest_common.sh@1114 -- # default_locks 00:05:27.422 14:00:30 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57502 00:05:27.422 14:00:30 -- event/cpu_locks.sh@47 -- # waitforlisten 57502 00:05:27.422 14:00:30 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:05:27.422 14:00:30 -- common/autotest_common.sh@829 -- # '[' -z 57502 ']' 00:05:27.422 14:00:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.422 14:00:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.422 14:00:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.422 14:00:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.422 14:00:30 -- common/autotest_common.sh@10 -- # set +x 00:05:27.422 [2024-12-08 14:00:30.254318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.422 [2024-12-08 14:00:30.254677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57502 ] 00:05:27.680 [2024-12-08 14:00:30.419109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.680 [2024-12-08 14:00:30.595808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.680 [2024-12-08 14:00:30.596175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.054 14:00:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.054 14:00:31 -- common/autotest_common.sh@862 -- # return 0 00:05:29.054 14:00:31 -- event/cpu_locks.sh@49 -- # locks_exist 57502 00:05:29.054 14:00:31 -- event/cpu_locks.sh@22 -- # lslocks -p 57502 00:05:29.054 14:00:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.311 14:00:31 -- event/cpu_locks.sh@50 -- # killprocess 57502 00:05:29.311 14:00:31 -- common/autotest_common.sh@936 -- # '[' -z 57502 ']' 00:05:29.311 14:00:31 -- common/autotest_common.sh@940 -- # kill -0 57502 00:05:29.311 14:00:31 -- common/autotest_common.sh@941 -- # uname 00:05:29.311 14:00:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.311 14:00:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57502 00:05:29.311 14:00:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.311 14:00:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.311 14:00:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57502' 00:05:29.311 killing process with pid 57502 00:05:29.311 14:00:32 -- common/autotest_common.sh@955 -- # kill 57502 00:05:29.311 14:00:32 -- common/autotest_common.sh@960 -- # wait 57502 00:05:30.682 14:00:33 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57502 00:05:30.682 14:00:33 -- common/autotest_common.sh@650 -- # local es=0 00:05:30.682 14:00:33 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57502 00:05:30.682 14:00:33 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.682 14:00:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.682 14:00:33 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.682 14:00:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.682 14:00:33 -- common/autotest_common.sh@653 -- # waitforlisten 57502 00:05:30.682 14:00:33 -- common/autotest_common.sh@829 -- # '[' -z 57502 ']' 00:05:30.682 14:00:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.682 14:00:33 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.682 14:00:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.682 14:00:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.682 14:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:30.682 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57502) - No such process 00:05:30.682 ERROR: process (pid: 57502) is no longer running 00:05:30.682 14:00:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.682 14:00:33 -- common/autotest_common.sh@862 -- # return 1 00:05:30.682 14:00:33 -- common/autotest_common.sh@653 -- # es=1 00:05:30.682 14:00:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.682 14:00:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.682 14:00:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.682 14:00:33 -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.682 14:00:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.682 14:00:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.682 14:00:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.682 00:05:30.682 real 0m3.269s 00:05:30.682 user 0m3.441s 00:05:30.682 sys 0m0.481s 00:05:30.682 14:00:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.682 ************************************ 00:05:30.682 END TEST default_locks 00:05:30.682 14:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:30.682 ************************************ 00:05:30.682 14:00:33 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.682 14:00:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.682 14:00:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.682 14:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:30.682 ************************************ 00:05:30.682 START TEST default_locks_via_rpc 00:05:30.682 ************************************ 00:05:30.682 14:00:33 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:30.682 14:00:33 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57568 00:05:30.682 14:00:33 -- event/cpu_locks.sh@63 -- # waitforlisten 57568 00:05:30.682 14:00:33 -- common/autotest_common.sh@829 -- # '[' -z 57568 ']' 00:05:30.682 14:00:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.682 14:00:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.682 14:00:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.682 14:00:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.682 14:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:30.682 14:00:33 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.682 [2024-12-08 14:00:33.540450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
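[annotation] The default_locks failure path just above runs through the NOT wrapper: waitforlisten against the already-killed pid 57502 must fail for the test to pass. A sketch of the inversion only; the traced (( es > 128 )) / [[ -n '' ]] branch handles signal exits and is elided here because its details are not shown:

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))   # exit 0 exactly when the wrapped command failed
    }

    # usage, matching the trace: NOT waitforlisten 57502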
00:05:30.682 [2024-12-08 14:00:33.540909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57568 ] 00:05:30.940 [2024-12-08 14:00:33.690411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.940 [2024-12-08 14:00:33.836932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.940 [2024-12-08 14:00:33.837113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.519 14:00:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.519 14:00:34 -- common/autotest_common.sh@862 -- # return 0 00:05:31.519 14:00:34 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.519 14:00:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.519 14:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:31.519 14:00:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.519 14:00:34 -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.519 14:00:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.519 14:00:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.519 14:00:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.519 14:00:34 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.519 14:00:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.519 14:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:31.519 14:00:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.519 14:00:34 -- event/cpu_locks.sh@71 -- # locks_exist 57568 00:05:31.519 14:00:34 -- event/cpu_locks.sh@22 -- # lslocks -p 57568 00:05:31.519 14:00:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.777 14:00:34 -- event/cpu_locks.sh@73 -- # killprocess 57568 00:05:31.777 14:00:34 -- common/autotest_common.sh@936 -- # '[' -z 57568 ']' 00:05:31.777 14:00:34 -- common/autotest_common.sh@940 -- # kill -0 57568 00:05:31.777 14:00:34 -- common/autotest_common.sh@941 -- # uname 00:05:31.777 14:00:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.777 14:00:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57568 00:05:31.777 14:00:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.777 14:00:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.777 killing process with pid 57568 00:05:31.777 14:00:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57568' 00:05:31.777 14:00:34 -- common/autotest_common.sh@955 -- # kill 57568 00:05:31.777 14:00:34 -- common/autotest_common.sh@960 -- # wait 57568 00:05:33.156 00:05:33.156 real 0m2.384s 00:05:33.156 user 0m2.385s 00:05:33.156 sys 0m0.408s 00:05:33.156 14:00:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.156 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.156 ************************************ 00:05:33.156 END TEST default_locks_via_rpc 00:05:33.156 ************************************ 00:05:33.156 14:00:35 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.156 14:00:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.156 14:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.156 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.156 
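A note on what the locks_exist checks running through these tests verify: spdk_tgt takes an exclusive advisory lock on one file per claimed core under /var/tmp (the check_remaining_locks step later in this log expands them as /var/tmp/spdk_cpu_lock_000 through _002), and lslocks reports those locks against the target pid. A minimal standalone re-creation of that check, assuming only util-linux lslocks is available; the helper name verify_core_lock is hypothetical:

    #!/usr/bin/env bash
    # Hedged re-creation of the locks_exist helper exercised above:
    # succeed only if the given pid holds at least one spdk_cpu_lock_* lock.
    verify_core_lock() {
        local pid=$1
        # lslocks lists advisory file locks per process; the lock file name
        # carries the core number, so any spdk_cpu_lock match is enough here.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    pid=57568    # pid of the spdk_tgt under test, taken from the log above
    verify_core_lock "$pid" && echo "pid $pid holds its CPU core lock(s)"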
************************************ 00:05:33.156 START TEST non_locking_app_on_locked_coremask 00:05:33.156 ************************************ 00:05:33.156 14:00:35 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:33.156 14:00:35 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57626 00:05:33.156 14:00:35 -- event/cpu_locks.sh@81 -- # waitforlisten 57626 /var/tmp/spdk.sock 00:05:33.156 14:00:35 -- common/autotest_common.sh@829 -- # '[' -z 57626 ']' 00:05:33.156 14:00:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.156 14:00:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.156 14:00:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.156 14:00:35 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.156 14:00:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.156 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.156 [2024-12-08 14:00:35.954846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.156 [2024-12-08 14:00:35.954942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57626 ] 00:05:33.413 [2024-12-08 14:00:36.096321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.413 [2024-12-08 14:00:36.258943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.413 [2024-12-08 14:00:36.259139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.978 14:00:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.978 14:00:36 -- common/autotest_common.sh@862 -- # return 0 00:05:33.978 14:00:36 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57642 00:05:33.978 14:00:36 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.978 14:00:36 -- event/cpu_locks.sh@85 -- # waitforlisten 57642 /var/tmp/spdk2.sock 00:05:33.978 14:00:36 -- common/autotest_common.sh@829 -- # '[' -z 57642 ']' 00:05:33.978 14:00:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.978 14:00:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.978 14:00:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.978 14:00:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.978 14:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:33.978 [2024-12-08 14:00:36.825487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.978 [2024-12-08 14:00:36.825588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57642 ] 00:05:34.237 [2024-12-08 14:00:36.973329] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.237 [2024-12-08 14:00:36.973372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.496 [2024-12-08 14:00:37.275055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.496 [2024-12-08 14:00:37.275204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.431 14:00:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.431 14:00:38 -- common/autotest_common.sh@862 -- # return 0 00:05:35.431 14:00:38 -- event/cpu_locks.sh@87 -- # locks_exist 57626 00:05:35.431 14:00:38 -- event/cpu_locks.sh@22 -- # lslocks -p 57626 00:05:35.431 14:00:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.997 14:00:38 -- event/cpu_locks.sh@89 -- # killprocess 57626 00:05:35.997 14:00:38 -- common/autotest_common.sh@936 -- # '[' -z 57626 ']' 00:05:35.997 14:00:38 -- common/autotest_common.sh@940 -- # kill -0 57626 00:05:35.997 14:00:38 -- common/autotest_common.sh@941 -- # uname 00:05:35.997 14:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.997 14:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57626 00:05:35.997 14:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.997 14:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.997 killing process with pid 57626 00:05:35.997 14:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57626' 00:05:35.997 14:00:38 -- common/autotest_common.sh@955 -- # kill 57626 00:05:35.997 14:00:38 -- common/autotest_common.sh@960 -- # wait 57626 00:05:38.527 14:00:40 -- event/cpu_locks.sh@90 -- # killprocess 57642 00:05:38.527 14:00:40 -- common/autotest_common.sh@936 -- # '[' -z 57642 ']' 00:05:38.527 14:00:40 -- common/autotest_common.sh@940 -- # kill -0 57642 00:05:38.527 14:00:40 -- common/autotest_common.sh@941 -- # uname 00:05:38.527 14:00:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.527 14:00:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57642 00:05:38.527 14:00:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.527 killing process with pid 57642 00:05:38.527 14:00:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.527 14:00:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57642' 00:05:38.527 14:00:40 -- common/autotest_common.sh@955 -- # kill 57642 00:05:38.527 14:00:40 -- common/autotest_common.sh@960 -- # wait 57642 00:05:39.462 00:05:39.462 real 0m6.262s 00:05:39.462 user 0m6.614s 00:05:39.462 sys 0m0.815s 00:05:39.462 14:00:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.462 14:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.462 ************************************ 00:05:39.462 END TEST non_locking_app_on_locked_coremask 00:05:39.462 ************************************ 00:05:39.462 14:00:42 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:39.462 14:00:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.462 14:00:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.462 14:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.462 ************************************ 00:05:39.462 START TEST locking_app_on_unlocked_coremask 00:05:39.462 ************************************ 00:05:39.462 14:00:42 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:39.462 14:00:42 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57740 00:05:39.462 14:00:42 -- event/cpu_locks.sh@99 -- # waitforlisten 57740 /var/tmp/spdk.sock 00:05:39.462 14:00:42 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:39.462 14:00:42 -- common/autotest_common.sh@829 -- # '[' -z 57740 ']' 00:05:39.462 14:00:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.462 14:00:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.462 14:00:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.462 14:00:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.463 14:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:39.463 [2024-12-08 14:00:42.269035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.463 [2024-12-08 14:00:42.269139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57740 ] 00:05:39.723 [2024-12-08 14:00:42.417767] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:39.723 [2024-12-08 14:00:42.417809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.723 [2024-12-08 14:00:42.591526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.723 [2024-12-08 14:00:42.591729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.121 14:00:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.122 14:00:43 -- common/autotest_common.sh@862 -- # return 0 00:05:41.122 14:00:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57764 00:05:41.122 14:00:43 -- event/cpu_locks.sh@103 -- # waitforlisten 57764 /var/tmp/spdk2.sock 00:05:41.122 14:00:43 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.122 14:00:43 -- common/autotest_common.sh@829 -- # '[' -z 57764 ']' 00:05:41.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.122 14:00:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.122 14:00:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.122 14:00:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.122 14:00:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.122 14:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:41.122 [2024-12-08 14:00:43.793582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:41.122 [2024-12-08 14:00:43.793691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57764 ] 00:05:41.122 [2024-12-08 14:00:43.947100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.383 [2024-12-08 14:00:44.300456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.383 [2024-12-08 14:00:44.300661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.305 14:00:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.305 14:00:45 -- common/autotest_common.sh@862 -- # return 0 00:05:43.305 14:00:45 -- event/cpu_locks.sh@105 -- # locks_exist 57764 00:05:43.305 14:00:45 -- event/cpu_locks.sh@22 -- # lslocks -p 57764 00:05:43.305 14:00:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.305 14:00:46 -- event/cpu_locks.sh@107 -- # killprocess 57740 00:05:43.305 14:00:46 -- common/autotest_common.sh@936 -- # '[' -z 57740 ']' 00:05:43.305 14:00:46 -- common/autotest_common.sh@940 -- # kill -0 57740 00:05:43.305 14:00:46 -- common/autotest_common.sh@941 -- # uname 00:05:43.305 14:00:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.305 14:00:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57740 00:05:43.564 killing process with pid 57740 00:05:43.564 14:00:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.564 14:00:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.564 14:00:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57740' 00:05:43.564 14:00:46 -- common/autotest_common.sh@955 -- # kill 57740 00:05:43.564 14:00:46 -- common/autotest_common.sh@960 -- # wait 57740 00:05:46.861 14:00:49 -- event/cpu_locks.sh@108 -- # killprocess 57764 00:05:46.861 14:00:49 -- common/autotest_common.sh@936 -- # '[' -z 57764 ']' 00:05:46.861 14:00:49 -- common/autotest_common.sh@940 -- # kill -0 57764 00:05:46.862 14:00:49 -- common/autotest_common.sh@941 -- # uname 00:05:46.862 14:00:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.862 14:00:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57764 00:05:46.862 killing process with pid 57764 00:05:46.862 14:00:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.862 14:00:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.862 14:00:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57764' 00:05:46.862 14:00:49 -- common/autotest_common.sh@955 -- # kill 57764 00:05:46.862 14:00:49 -- common/autotest_common.sh@960 -- # wait 57764 00:05:47.794 00:05:47.794 real 0m8.186s 00:05:47.794 user 0m8.757s 00:05:47.794 sys 0m0.877s 00:05:47.794 14:00:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.794 ************************************ 00:05:47.794 END TEST locking_app_on_unlocked_coremask 00:05:47.794 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:47.794 ************************************ 00:05:47.794 14:00:50 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:47.794 14:00:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.794 14:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.794 14:00:50 -- common/autotest_common.sh@10 -- # set +x 
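The pair of tests just completed exercises core sharing in both directions: a second target bound to an already-claimed core starts only when it opts out of lock claiming. A sketch of the launch sequence under test, assuming the build/bin/spdk_tgt binary from this repo layout; the mask, flag, and socket values mirror the invocations logged above:

    # First instance claims core 0 through its lock file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

    # Second instance shares core 0 but skips lock claiming entirely, and
    # listens on its own RPC socket so the two targets do not collide.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &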
00:05:47.794 ************************************ 00:05:47.794 START TEST locking_app_on_locked_coremask 00:05:47.794 ************************************ 00:05:47.794 14:00:50 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:47.794 14:00:50 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57874 00:05:47.794 14:00:50 -- event/cpu_locks.sh@116 -- # waitforlisten 57874 /var/tmp/spdk.sock 00:05:47.794 14:00:50 -- common/autotest_common.sh@829 -- # '[' -z 57874 ']' 00:05:47.794 14:00:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.794 14:00:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.794 14:00:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.794 14:00:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.794 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:47.794 14:00:50 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.794 [2024-12-08 14:00:50.510521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.794 [2024-12-08 14:00:50.510790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57874 ] 00:05:47.794 [2024-12-08 14:00:50.660447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.052 [2024-12-08 14:00:50.844935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.052 [2024-12-08 14:00:50.845180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.431 14:00:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.431 14:00:52 -- common/autotest_common.sh@862 -- # return 0 00:05:49.431 14:00:52 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57899 00:05:49.431 14:00:52 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57899 /var/tmp/spdk2.sock 00:05:49.431 14:00:52 -- common/autotest_common.sh@650 -- # local es=0 00:05:49.431 14:00:52 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57899 /var/tmp/spdk2.sock 00:05:49.431 14:00:52 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.431 14:00:52 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:49.431 14:00:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.431 14:00:52 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:49.431 14:00:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.431 14:00:52 -- common/autotest_common.sh@653 -- # waitforlisten 57899 /var/tmp/spdk2.sock 00:05:49.431 14:00:52 -- common/autotest_common.sh@829 -- # '[' -z 57899 ']' 00:05:49.431 14:00:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.431 14:00:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.431 14:00:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:49.431 14:00:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.431 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.431 [2024-12-08 14:00:52.076761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.431 [2024-12-08 14:00:52.076897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57899 ] 00:05:49.431 [2024-12-08 14:00:52.233247] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57874 has claimed it. 00:05:49.431 [2024-12-08 14:00:52.233309] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.001 ERROR: process (pid: 57899) is no longer running 00:05:50.001 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57899) - No such process 00:05:50.001 14:00:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.001 14:00:52 -- common/autotest_common.sh@862 -- # return 1 00:05:50.001 14:00:52 -- common/autotest_common.sh@653 -- # es=1 00:05:50.001 14:00:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.001 14:00:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.001 14:00:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.001 14:00:52 -- event/cpu_locks.sh@122 -- # locks_exist 57874 00:05:50.001 14:00:52 -- event/cpu_locks.sh@22 -- # lslocks -p 57874 00:05:50.001 14:00:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.001 14:00:52 -- event/cpu_locks.sh@124 -- # killprocess 57874 00:05:50.001 14:00:52 -- common/autotest_common.sh@936 -- # '[' -z 57874 ']' 00:05:50.001 14:00:52 -- common/autotest_common.sh@940 -- # kill -0 57874 00:05:50.001 14:00:52 -- common/autotest_common.sh@941 -- # uname 00:05:50.001 14:00:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.001 14:00:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57874 00:05:50.270 killing process with pid 57874 00:05:50.270 14:00:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.270 14:00:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.270 14:00:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57874' 00:05:50.270 14:00:52 -- common/autotest_common.sh@955 -- # kill 57874 00:05:50.270 14:00:52 -- common/autotest_common.sh@960 -- # wait 57874 00:05:51.712 00:05:51.712 real 0m3.813s 00:05:51.712 user 0m4.151s 00:05:51.712 sys 0m0.566s 00:05:51.712 14:00:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.712 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.712 ************************************ 00:05:51.712 END TEST locking_app_on_locked_coremask 00:05:51.712 ************************************ 00:05:51.712 14:00:54 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:51.712 14:00:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.712 14:00:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.712 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.712 ************************************ 00:05:51.712 START TEST locking_overlapped_coremask 00:05:51.712 ************************************ 00:05:51.712 14:00:54 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:51.712 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.712 14:00:54 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57952 00:05:51.712 14:00:54 -- event/cpu_locks.sh@133 -- # waitforlisten 57952 /var/tmp/spdk.sock 00:05:51.712 14:00:54 -- common/autotest_common.sh@829 -- # '[' -z 57952 ']' 00:05:51.712 14:00:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.712 14:00:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.712 14:00:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.712 14:00:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.712 14:00:54 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:51.712 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.712 [2024-12-08 14:00:54.386157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.712 [2024-12-08 14:00:54.386408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57952 ] 00:05:51.712 [2024-12-08 14:00:54.532532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.971 [2024-12-08 14:00:54.674082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.971 [2024-12-08 14:00:54.674402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.971 [2024-12-08 14:00:54.675244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.971 [2024-12-08 14:00:54.675316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.229 14:00:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.229 14:00:55 -- common/autotest_common.sh@862 -- # return 0 00:05:52.229 14:00:55 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57970 00:05:52.229 14:00:55 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57970 /var/tmp/spdk2.sock 00:05:52.229 14:00:55 -- common/autotest_common.sh@650 -- # local es=0 00:05:52.229 14:00:55 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57970 /var/tmp/spdk2.sock 00:05:52.229 14:00:55 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.229 14:00:55 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:52.229 14:00:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.229 14:00:55 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.229 14:00:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.229 14:00:55 -- common/autotest_common.sh@653 -- # waitforlisten 57970 /var/tmp/spdk2.sock 00:05:52.229 14:00:55 -- common/autotest_common.sh@829 -- # '[' -z 57970 ']' 00:05:52.229 14:00:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.229 14:00:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.229 14:00:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:52.229 14:00:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.229 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:05:52.486 [2024-12-08 14:00:55.180083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.486 [2024-12-08 14:00:55.180483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57970 ] 00:05:52.486 [2024-12-08 14:00:55.335141] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57952 has claimed it. 00:05:52.486 [2024-12-08 14:00:55.335198] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.052 ERROR: process (pid: 57970) is no longer running 00:05:53.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57970) - No such process 00:05:53.052 14:00:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.052 14:00:55 -- common/autotest_common.sh@862 -- # return 1 00:05:53.052 14:00:55 -- common/autotest_common.sh@653 -- # es=1 00:05:53.052 14:00:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.052 14:00:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.052 14:00:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.052 14:00:55 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.052 14:00:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.052 14:00:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.052 14:00:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.052 14:00:55 -- event/cpu_locks.sh@141 -- # killprocess 57952 00:05:53.052 14:00:55 -- common/autotest_common.sh@936 -- # '[' -z 57952 ']' 00:05:53.052 14:00:55 -- common/autotest_common.sh@940 -- # kill -0 57952 00:05:53.052 14:00:55 -- common/autotest_common.sh@941 -- # uname 00:05:53.052 14:00:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.052 14:00:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57952 00:05:53.052 14:00:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.052 14:00:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.052 14:00:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57952' 00:05:53.052 killing process with pid 57952 00:05:53.052 14:00:55 -- common/autotest_common.sh@955 -- # kill 57952 00:05:53.052 14:00:55 -- common/autotest_common.sh@960 -- # wait 57952 00:05:54.429 00:05:54.429 real 0m2.714s 00:05:54.429 user 0m7.027s 00:05:54.429 sys 0m0.387s 00:05:54.429 14:00:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.429 ************************************ 00:05:54.429 14:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.429 END TEST locking_overlapped_coremask 00:05:54.429 ************************************ 00:05:54.429 14:00:57 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.429 14:00:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.429 14:00:57 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.429 14:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.429 ************************************ 00:05:54.429 START TEST locking_overlapped_coremask_via_rpc 00:05:54.429 ************************************ 00:05:54.429 14:00:57 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:54.429 14:00:57 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58023 00:05:54.429 14:00:57 -- event/cpu_locks.sh@149 -- # waitforlisten 58023 /var/tmp/spdk.sock 00:05:54.429 14:00:57 -- common/autotest_common.sh@829 -- # '[' -z 58023 ']' 00:05:54.429 14:00:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.429 14:00:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.429 14:00:57 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.429 14:00:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.429 14:00:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.429 14:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:54.429 [2024-12-08 14:00:57.148877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.429 [2024-12-08 14:00:57.149014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58023 ] 00:05:54.429 [2024-12-08 14:00:57.298657] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.429 [2024-12-08 14:00:57.298702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.690 [2024-12-08 14:00:57.492916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.690 [2024-12-08 14:00:57.493468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.690 [2024-12-08 14:00:57.493683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.690 [2024-12-08 14:00:57.493848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.074 14:00:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.074 14:00:58 -- common/autotest_common.sh@862 -- # return 0 00:05:56.074 14:00:58 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58043 00:05:56.074 14:00:58 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.074 14:00:58 -- event/cpu_locks.sh@153 -- # waitforlisten 58043 /var/tmp/spdk2.sock 00:05:56.074 14:00:58 -- common/autotest_common.sh@829 -- # '[' -z 58043 ']' 00:05:56.074 14:00:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.074 14:00:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.074 14:00:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:56.074 14:00:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.074 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:05:56.074 [2024-12-08 14:00:58.772873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.074 [2024-12-08 14:00:58.773347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58043 ] 00:05:56.074 [2024-12-08 14:00:58.933722] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:56.074 [2024-12-08 14:00:58.933779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.646 [2024-12-08 14:00:59.397319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.646 [2024-12-08 14:00:59.397842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.646 [2024-12-08 14:00:59.401261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.646 [2024-12-08 14:00:59.401307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.550 14:01:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.550 14:01:01 -- common/autotest_common.sh@862 -- # return 0 00:05:58.550 14:01:01 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.550 14:01:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.550 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 14:01:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.550 14:01:01 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.550 14:01:01 -- common/autotest_common.sh@650 -- # local es=0 00:05:58.550 14:01:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.550 14:01:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:58.550 14:01:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.550 14:01:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:58.550 14:01:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.550 14:01:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.550 14:01:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.550 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 [2024-12-08 14:01:01.023141] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58023 has claimed it. 00:05:58.550 request: 00:05:58.550 { 00:05:58.550 "method": "framework_enable_cpumask_locks", 00:05:58.550 "req_id": 1 00:05:58.550 } 00:05:58.550 Got JSON-RPC error response 00:05:58.550 response: 00:05:58.550 { 00:05:58.550 "code": -32603, 00:05:58.550 "message": "Failed to claim CPU core: 2" 00:05:58.550 } 00:05:58.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
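The -32603 response above is the expected outcome, and the core number in the message follows from the masks: 0x7 is binary 00111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so core 2 is the one both targets contend for. The first target (pid 58023) claimed its locks over RPC, which is why the same call against the second target's socket fails. A sketch of issuing that call by hand, assuming SPDK's scripts/rpc.py exposes the method under the same name rpc_cmd uses here:

    # Ask the 0x1c target to claim locks for cores 2,3,4; core 2 is already
    # held by pid 58023, so this returns the JSON-RPC error shown above.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks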
00:05:58.550 14:01:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.550 14:01:01 -- common/autotest_common.sh@653 -- # es=1 00:05:58.550 14:01:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.550 14:01:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.550 14:01:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.550 14:01:01 -- event/cpu_locks.sh@158 -- # waitforlisten 58023 /var/tmp/spdk.sock 00:05:58.550 14:01:01 -- common/autotest_common.sh@829 -- # '[' -z 58023 ']' 00:05:58.550 14:01:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.550 14:01:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.550 14:01:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.550 14:01:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.550 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.550 14:01:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.550 14:01:01 -- common/autotest_common.sh@862 -- # return 0 00:05:58.550 14:01:01 -- event/cpu_locks.sh@159 -- # waitforlisten 58043 /var/tmp/spdk2.sock 00:05:58.550 14:01:01 -- common/autotest_common.sh@829 -- # '[' -z 58043 ']' 00:05:58.550 14:01:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.550 14:01:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.550 14:01:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.550 14:01:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.550 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 14:01:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.550 14:01:01 -- common/autotest_common.sh@862 -- # return 0 00:05:58.550 14:01:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.550 14:01:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.550 14:01:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.550 14:01:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.550 ************************************ 00:05:58.550 END TEST locking_overlapped_coremask_via_rpc 00:05:58.550 ************************************ 00:05:58.550 00:05:58.550 real 0m4.337s 00:05:58.550 user 0m1.544s 00:05:58.550 sys 0m0.215s 00:05:58.550 14:01:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.550 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:05:58.550 14:01:01 -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.550 14:01:01 -- event/cpu_locks.sh@15 -- # [[ -z 58023 ]] 00:05:58.550 14:01:01 -- event/cpu_locks.sh@15 -- # killprocess 58023 00:05:58.550 14:01:01 -- common/autotest_common.sh@936 -- # '[' -z 58023 ']' 00:05:58.550 14:01:01 -- common/autotest_common.sh@940 -- # kill -0 58023 00:05:58.550 14:01:01 -- common/autotest_common.sh@941 -- # uname 00:05:58.550 14:01:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.550 14:01:01 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 58023 00:05:58.808 killing process with pid 58023 00:05:58.808 14:01:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.808 14:01:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.808 14:01:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58023' 00:05:58.808 14:01:01 -- common/autotest_common.sh@955 -- # kill 58023 00:05:58.808 14:01:01 -- common/autotest_common.sh@960 -- # wait 58023 00:05:59.743 14:01:02 -- event/cpu_locks.sh@16 -- # [[ -z 58043 ]] 00:05:59.743 14:01:02 -- event/cpu_locks.sh@16 -- # killprocess 58043 00:05:59.743 14:01:02 -- common/autotest_common.sh@936 -- # '[' -z 58043 ']' 00:05:59.743 14:01:02 -- common/autotest_common.sh@940 -- # kill -0 58043 00:05:59.743 14:01:02 -- common/autotest_common.sh@941 -- # uname 00:05:59.743 14:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.743 14:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58043 00:06:00.003 14:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:00.003 14:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:00.003 killing process with pid 58043 00:06:00.003 14:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58043' 00:06:00.003 14:01:02 -- common/autotest_common.sh@955 -- # kill 58043 00:06:00.003 14:01:02 -- common/autotest_common.sh@960 -- # wait 58043 00:06:01.384 14:01:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.384 Process with pid 58023 is not found 00:06:01.384 Process with pid 58043 is not found 00:06:01.384 14:01:03 -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.384 14:01:03 -- event/cpu_locks.sh@15 -- # [[ -z 58023 ]] 00:06:01.384 14:01:03 -- event/cpu_locks.sh@15 -- # killprocess 58023 00:06:01.384 14:01:03 -- common/autotest_common.sh@936 -- # '[' -z 58023 ']' 00:06:01.384 14:01:03 -- common/autotest_common.sh@940 -- # kill -0 58023 00:06:01.384 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58023) - No such process 00:06:01.384 14:01:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58023 is not found' 00:06:01.384 14:01:03 -- event/cpu_locks.sh@16 -- # [[ -z 58043 ]] 00:06:01.384 14:01:03 -- event/cpu_locks.sh@16 -- # killprocess 58043 00:06:01.384 14:01:03 -- common/autotest_common.sh@936 -- # '[' -z 58043 ']' 00:06:01.384 14:01:03 -- common/autotest_common.sh@940 -- # kill -0 58043 00:06:01.384 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58043) - No such process 00:06:01.384 14:01:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58043 is not found' 00:06:01.384 14:01:03 -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.384 ************************************ 00:06:01.384 END TEST cpu_locks 00:06:01.384 ************************************ 00:06:01.384 00:06:01.384 real 0m33.865s 00:06:01.384 user 0m58.463s 00:06:01.384 sys 0m4.724s 00:06:01.384 14:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.384 14:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.384 ************************************ 00:06:01.384 END TEST event 00:06:01.384 ************************************ 00:06:01.384 00:06:01.384 real 1m0.355s 00:06:01.384 user 1m48.598s 00:06:01.384 sys 0m7.866s 00:06:01.384 14:01:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.384 14:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.384 14:01:03 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.384 14:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.384 14:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.384 14:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.384 ************************************ 00:06:01.384 START TEST thread 00:06:01.384 ************************************ 00:06:01.384 14:01:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.384 * Looking for test storage... 00:06:01.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:01.384 14:01:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.384 14:01:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.384 14:01:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.384 14:01:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.384 14:01:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.384 14:01:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.384 14:01:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.384 14:01:04 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.384 14:01:04 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.384 14:01:04 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.384 14:01:04 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.384 14:01:04 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.384 14:01:04 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.384 14:01:04 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.384 14:01:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.384 14:01:04 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.384 14:01:04 -- scripts/common.sh@344 -- # : 1 00:06:01.384 14:01:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.384 14:01:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.384 14:01:04 -- scripts/common.sh@364 -- # decimal 1 00:06:01.384 14:01:04 -- scripts/common.sh@352 -- # local d=1 00:06:01.384 14:01:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.384 14:01:04 -- scripts/common.sh@354 -- # echo 1 00:06:01.384 14:01:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.384 14:01:04 -- scripts/common.sh@365 -- # decimal 2 00:06:01.384 14:01:04 -- scripts/common.sh@352 -- # local d=2 00:06:01.384 14:01:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.384 14:01:04 -- scripts/common.sh@354 -- # echo 2 00:06:01.384 14:01:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.384 14:01:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.384 14:01:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.384 14:01:04 -- scripts/common.sh@367 -- # return 0 00:06:01.384 14:01:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.384 14:01:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.384 --rc genhtml_branch_coverage=1 00:06:01.384 --rc genhtml_function_coverage=1 00:06:01.384 --rc genhtml_legend=1 00:06:01.384 --rc geninfo_all_blocks=1 00:06:01.384 --rc geninfo_unexecuted_blocks=1 00:06:01.384 00:06:01.384 ' 00:06:01.384 14:01:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.384 --rc genhtml_branch_coverage=1 00:06:01.384 --rc genhtml_function_coverage=1 00:06:01.384 --rc genhtml_legend=1 00:06:01.384 --rc geninfo_all_blocks=1 00:06:01.384 --rc geninfo_unexecuted_blocks=1 00:06:01.384 00:06:01.384 ' 00:06:01.384 14:01:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.384 --rc genhtml_branch_coverage=1 00:06:01.384 --rc genhtml_function_coverage=1 00:06:01.384 --rc genhtml_legend=1 00:06:01.384 --rc geninfo_all_blocks=1 00:06:01.384 --rc geninfo_unexecuted_blocks=1 00:06:01.384 00:06:01.384 ' 00:06:01.384 14:01:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.384 --rc genhtml_branch_coverage=1 00:06:01.384 --rc genhtml_function_coverage=1 00:06:01.384 --rc genhtml_legend=1 00:06:01.384 --rc geninfo_all_blocks=1 00:06:01.384 --rc geninfo_unexecuted_blocks=1 00:06:01.384 00:06:01.384 ' 00:06:01.384 14:01:04 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.384 14:01:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:01.384 14:01:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.384 14:01:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.384 ************************************ 00:06:01.384 START TEST thread_poller_perf 00:06:01.384 ************************************ 00:06:01.384 14:01:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.384 [2024-12-08 14:01:04.140277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:01.384 [2024-12-08 14:01:04.140487] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58217 ] 00:06:01.384 [2024-12-08 14:01:04.290074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.648 [2024-12-08 14:01:04.464743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.648 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:03.033 [2024-12-08T14:01:05.953Z] ====================================== 00:06:03.033 [2024-12-08T14:01:05.953Z] busy:2616851780 (cyc) 00:06:03.033 [2024-12-08T14:01:05.953Z] total_run_count: 293000 00:06:03.033 [2024-12-08T14:01:05.953Z] tsc_hz: 2600000000 (cyc) 00:06:03.033 [2024-12-08T14:01:05.953Z] ====================================== 00:06:03.033 [2024-12-08T14:01:05.953Z] poller_cost: 8931 (cyc), 3435 (nsec) 00:06:03.033 ************************************ 00:06:03.033 END TEST thread_poller_perf 00:06:03.033 ************************************ 00:06:03.033 00:06:03.033 real 0m1.632s 00:06:03.033 user 0m1.438s 00:06:03.033 sys 0m0.083s 00:06:03.033 14:01:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.033 14:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.033 14:01:05 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.033 14:01:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:03.033 14:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.033 14:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.033 ************************************ 00:06:03.033 START TEST thread_poller_perf 00:06:03.033 ************************************ 00:06:03.033 14:01:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.033 [2024-12-08 14:01:05.825724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.033 [2024-12-08 14:01:05.825828] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:06:03.293 [2024-12-08 14:01:05.974425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.293 [2024-12-08 14:01:06.153494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.293 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:04.677 [2024-12-08T14:01:07.597Z] ====================================== 00:06:04.677 [2024-12-08T14:01:07.598Z] busy:2604683720 (cyc) 00:06:04.678 [2024-12-08T14:01:07.598Z] total_run_count: 3973000 00:06:04.678 [2024-12-08T14:01:07.598Z] tsc_hz: 2600000000 (cyc) 00:06:04.678 [2024-12-08T14:01:07.598Z] ====================================== 00:06:04.678 [2024-12-08T14:01:07.598Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:04.678 00:06:04.678 real 0m1.562s 00:06:04.678 user 0m1.384s 00:06:04.678 sys 0m0.070s 00:06:04.678 ************************************ 00:06:04.678 END TEST thread_poller_perf 00:06:04.678 ************************************ 00:06:04.678 14:01:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.678 14:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.678 14:01:07 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.678 ************************************ 00:06:04.678 END TEST thread 00:06:04.678 ************************************ 00:06:04.678 00:06:04.678 real 0m3.449s 00:06:04.678 user 0m2.940s 00:06:04.678 sys 0m0.268s 00:06:04.678 14:01:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.678 14:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.678 14:01:07 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:04.678 14:01:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.678 14:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.678 14:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.678 ************************************ 00:06:04.678 START TEST accel 00:06:04.678 ************************************ 00:06:04.678 14:01:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:04.678 * Looking for test storage... 00:06:04.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:04.678 14:01:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.678 14:01:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.678 14:01:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.678 14:01:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.678 14:01:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.678 14:01:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.678 14:01:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.678 14:01:07 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.678 14:01:07 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.678 14:01:07 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.678 14:01:07 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.678 14:01:07 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.678 14:01:07 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.678 14:01:07 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.678 14:01:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.678 14:01:07 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.678 14:01:07 -- scripts/common.sh@344 -- # : 1 00:06:04.678 14:01:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.678 14:01:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.678 14:01:07 -- scripts/common.sh@364 -- # decimal 1 00:06:04.678 14:01:07 -- scripts/common.sh@352 -- # local d=1 00:06:04.678 14:01:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.678 14:01:07 -- scripts/common.sh@354 -- # echo 1 00:06:04.678 14:01:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.678 14:01:07 -- scripts/common.sh@365 -- # decimal 2 00:06:04.939 14:01:07 -- scripts/common.sh@352 -- # local d=2 00:06:04.939 14:01:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.939 14:01:07 -- scripts/common.sh@354 -- # echo 2 00:06:04.939 14:01:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.939 14:01:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.939 14:01:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.939 14:01:07 -- scripts/common.sh@367 -- # return 0 00:06:04.939 14:01:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.939 14:01:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.939 --rc genhtml_branch_coverage=1 00:06:04.939 --rc genhtml_function_coverage=1 00:06:04.939 --rc genhtml_legend=1 00:06:04.939 --rc geninfo_all_blocks=1 00:06:04.939 --rc geninfo_unexecuted_blocks=1 00:06:04.939 00:06:04.939 ' 00:06:04.939 14:01:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.939 --rc genhtml_branch_coverage=1 00:06:04.939 --rc genhtml_function_coverage=1 00:06:04.939 --rc genhtml_legend=1 00:06:04.939 --rc geninfo_all_blocks=1 00:06:04.939 --rc geninfo_unexecuted_blocks=1 00:06:04.939 00:06:04.939 ' 00:06:04.939 14:01:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.939 --rc genhtml_branch_coverage=1 00:06:04.939 --rc genhtml_function_coverage=1 00:06:04.939 --rc genhtml_legend=1 00:06:04.939 --rc geninfo_all_blocks=1 00:06:04.939 --rc geninfo_unexecuted_blocks=1 00:06:04.939 00:06:04.939 ' 00:06:04.939 14:01:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.939 --rc genhtml_branch_coverage=1 00:06:04.939 --rc genhtml_function_coverage=1 00:06:04.939 --rc genhtml_legend=1 00:06:04.939 --rc geninfo_all_blocks=1 00:06:04.939 --rc geninfo_unexecuted_blocks=1 00:06:04.939 00:06:04.939 ' 00:06:04.939 14:01:07 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:04.939 14:01:07 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:04.939 14:01:07 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.939 14:01:07 -- accel/accel.sh@59 -- # spdk_tgt_pid=58336 00:06:04.939 14:01:07 -- accel/accel.sh@60 -- # waitforlisten 58336 00:06:04.939 14:01:07 -- common/autotest_common.sh@829 -- # '[' -z 58336 ']' 00:06:04.939 14:01:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.939 14:01:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.939 14:01:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.939 14:01:07 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:04.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.939 14:01:07 -- accel/accel.sh@58 -- # build_accel_config 00:06:04.939 14:01:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.939 14:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.939 14:01:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.939 14:01:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.939 14:01:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.939 14:01:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.939 14:01:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.939 14:01:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.939 14:01:07 -- accel/accel.sh@42 -- # jq -r . 00:06:04.939 [2024-12-08 14:01:07.664014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.939 [2024-12-08 14:01:07.664100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58336 ] 00:06:04.939 [2024-12-08 14:01:07.808858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.214 [2024-12-08 14:01:08.012154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.214 [2024-12-08 14:01:08.012393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.603 14:01:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.603 14:01:09 -- common/autotest_common.sh@862 -- # return 0 00:06:06.603 14:01:09 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:06.603 14:01:09 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:06.603 14:01:09 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:06.603 14:01:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.603 14:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:06.603 14:01:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt 
in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # IFS== 00:06:06.603 14:01:09 -- accel/accel.sh@64 -- # read -r opc module 00:06:06.603 14:01:09 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:06.603 14:01:09 -- accel/accel.sh@67 -- # killprocess 58336 00:06:06.603 14:01:09 -- common/autotest_common.sh@936 -- # '[' -z 58336 ']' 00:06:06.603 14:01:09 -- common/autotest_common.sh@940 -- # kill -0 58336 00:06:06.603 14:01:09 -- common/autotest_common.sh@941 -- # uname 00:06:06.603 14:01:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.603 14:01:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58336 00:06:06.603 14:01:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.603 killing process with pid 58336 00:06:06.603 14:01:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.603 14:01:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58336' 00:06:06.603 14:01:09 -- common/autotest_common.sh@955 -- # kill 58336 00:06:06.603 14:01:09 -- common/autotest_common.sh@960 -- # wait 58336 00:06:07.975 14:01:10 -- accel/accel.sh@68 -- # trap - ERR 00:06:07.975 14:01:10 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:07.975 14:01:10 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:07.975 14:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.975 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.975 14:01:10 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:07.975 14:01:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:07.975 14:01:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.976 14:01:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.976 14:01:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.976 14:01:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.976 14:01:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.976 14:01:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.976 14:01:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.976 14:01:10 -- accel/accel.sh@42 -- # jq -r . 00:06:07.976 14:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.976 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.976 14:01:10 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:07.976 14:01:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:07.976 14:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.976 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:07.976 ************************************ 00:06:07.976 START TEST accel_missing_filename 00:06:07.976 ************************************ 00:06:07.976 14:01:10 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:07.976 14:01:10 -- common/autotest_common.sh@650 -- # local es=0 00:06:07.976 14:01:10 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:07.976 14:01:10 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:07.976 14:01:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.976 14:01:10 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:07.976 14:01:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.976 14:01:10 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:07.976 14:01:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.976 14:01:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:07.976 14:01:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.976 14:01:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.976 14:01:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.976 14:01:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.976 14:01:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.976 14:01:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.976 14:01:10 -- accel/accel.sh@42 -- # jq -r . 00:06:07.976 [2024-12-08 14:01:10.672689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:07.976 [2024-12-08 14:01:10.672793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:06:07.976 [2024-12-08 14:01:10.822258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.236 [2024-12-08 14:01:11.003236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.236 [2024-12-08 14:01:11.145880] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.806 [2024-12-08 14:01:11.486434] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:09.068 A filename is required. 00:06:09.068 14:01:11 -- common/autotest_common.sh@653 -- # es=234 00:06:09.068 14:01:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.068 14:01:11 -- common/autotest_common.sh@662 -- # es=106 00:06:09.068 14:01:11 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.068 14:01:11 -- common/autotest_common.sh@670 -- # es=1 00:06:09.068 14:01:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.068 00:06:09.068 real 0m1.131s 00:06:09.068 user 0m0.913s 00:06:09.068 sys 0m0.137s 00:06:09.068 14:01:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.068 14:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.068 ************************************ 00:06:09.068 END TEST accel_missing_filename 00:06:09.068 ************************************ 00:06:09.068 14:01:11 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.068 14:01:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:09.068 14:01:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.068 14:01:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.068 ************************************ 00:06:09.068 START TEST accel_compress_verify 00:06:09.068 ************************************ 00:06:09.068 14:01:11 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.068 14:01:11 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.068 14:01:11 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.068 14:01:11 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:09.068 14:01:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.068 14:01:11 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:09.068 14:01:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.068 14:01:11 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.068 14:01:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.068 14:01:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.068 14:01:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.068 14:01:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.068 14:01:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.068 14:01:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.068 14:01:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.068 14:01:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.068 
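The "A filename is required." abort above is the expected negative outcome: a compress workload has no input to operate on without -l. For contrast, a form of the command that gives compress everything it needs (the file path comes from the accel_compress_verify run being set up here; dropping -y matters, since the next test shows compress rejecting the verify option):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib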
14:01:11 -- accel/accel.sh@42 -- # jq -r . 00:06:09.068 [2024-12-08 14:01:11.880503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.068 [2024-12-08 14:01:11.880637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ] 00:06:09.328 [2024-12-08 14:01:12.031194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.587 [2024-12-08 14:01:12.277554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.587 [2024-12-08 14:01:12.445566] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.159 [2024-12-08 14:01:12.819274] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:10.421 00:06:10.421 Compression does not support the verify option, aborting. 00:06:10.421 14:01:13 -- common/autotest_common.sh@653 -- # es=161 00:06:10.421 14:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.421 14:01:13 -- common/autotest_common.sh@662 -- # es=33 00:06:10.421 14:01:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.421 14:01:13 -- common/autotest_common.sh@670 -- # es=1 00:06:10.421 14:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.421 00:06:10.421 real 0m1.282s 00:06:10.421 user 0m1.022s 00:06:10.421 sys 0m0.177s 00:06:10.421 14:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.421 ************************************ 00:06:10.421 END TEST accel_compress_verify 00:06:10.421 ************************************ 00:06:10.421 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.421 14:01:13 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:10.421 14:01:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:10.421 14:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.421 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.421 ************************************ 00:06:10.421 START TEST accel_wrong_workload 00:06:10.421 ************************************ 00:06:10.421 14:01:13 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:10.421 14:01:13 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.421 14:01:13 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:10.421 14:01:13 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.421 14:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.421 14:01:13 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.421 14:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.421 14:01:13 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:10.421 14:01:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:10.421 14:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.421 14:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.421 14:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.421 14:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.421 14:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.421 14:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.421 14:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.421 14:01:13 -- accel/accel.sh@42 -- # jq -r . 
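Both negative tests so far pass only because the harness inverts the exit status of the wrapped command. A sketch of that NOT wrapper consistent with the es traces above (es=234 dropping to es=106 and es=161 to es=33 suggests statuses above 128 are reduced by 128 before the final check; the real autotest_common.sh implementation may differ in detail):

NOT() {
    local es=0
    "$@" || es=$?
    # normalize signal-style exit codes, e.g. 234 -> 106, 161 -> 33
    ((es > 128)) && es=$((es - 128))
    # the wrapped command was expected to fail, so success means NOT fails
    ((es != 0))
}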
00:06:10.421 Unsupported workload type: foobar 00:06:10.421 [2024-12-08 14:01:13.239454] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:10.421 accel_perf options: 00:06:10.421 [-h help message] 00:06:10.421 [-q queue depth per core] 00:06:10.421 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.421 [-T number of threads per core 00:06:10.421 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.421 [-t time in seconds] 00:06:10.421 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.421 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.421 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.421 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.421 [-S for crc32c workload, use this seed value (default 0) 00:06:10.421 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.421 [-f for fill workload, use this BYTE value (default 255) 00:06:10.421 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.422 [-y verify result if this switch is on] 00:06:10.422 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.422 Can be used to spread operations across a wider range of memory. 00:06:10.422 14:01:13 -- common/autotest_common.sh@653 -- # es=1 00:06:10.422 14:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.422 14:01:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.422 14:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.422 00:06:10.422 real 0m0.072s 00:06:10.422 user 0m0.060s 00:06:10.422 sys 0m0.040s 00:06:10.422 ************************************ 00:06:10.422 END TEST accel_wrong_workload 00:06:10.422 ************************************ 00:06:10.422 14:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.422 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.422 14:01:13 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.422 14:01:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:10.422 14:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.422 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.422 ************************************ 00:06:10.422 START TEST accel_negative_buffers 00:06:10.422 ************************************ 00:06:10.422 14:01:13 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.422 14:01:13 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.422 14:01:13 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:10.422 14:01:13 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.422 14:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.422 14:01:13 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.422 14:01:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.422 14:01:13 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:10.422 14:01:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:10.422 14:01:13 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:10.422 14:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.422 14:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.422 14:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.422 14:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.422 14:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.422 14:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.422 14:01:13 -- accel/accel.sh@42 -- # jq -r . 00:06:10.683 -x option must be non-negative. 00:06:10.683 [2024-12-08 14:01:13.366514] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:10.683 accel_perf options: 00:06:10.683 [-h help message] 00:06:10.683 [-q queue depth per core] 00:06:10.683 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.683 [-T number of threads per core 00:06:10.683 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.683 [-t time in seconds] 00:06:10.683 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.683 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.683 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.683 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.683 [-S for crc32c workload, use this seed value (default 0) 00:06:10.683 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.683 [-f for fill workload, use this BYTE value (default 255) 00:06:10.683 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.683 [-y verify result if this switch is on] 00:06:10.683 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.683 Can be used to spread operations across a wider range of memory. 
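The option summary printed twice above doubles as a reference for every accel_perf invocation in this log. Two combinations assembled purely from flags in that summary (the -x value of 3 is an arbitrary choice satisfying the stated minimum of 2):

# verified 1-second CRC-32C run with seed 32, as used by the accel_crc32c test below
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# xor across three source buffers; the -x -1 attempt above is rejected as negative
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3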
00:06:10.683 14:01:13 -- common/autotest_common.sh@653 -- # es=1 00:06:10.683 14:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.683 14:01:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.683 ************************************ 00:06:10.683 END TEST accel_negative_buffers 00:06:10.683 ************************************ 00:06:10.683 14:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.683 00:06:10.683 real 0m0.065s 00:06:10.683 user 0m0.056s 00:06:10.683 sys 0m0.039s 00:06:10.683 14:01:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.683 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.683 14:01:13 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:10.683 14:01:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:10.683 14:01:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.683 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:10.683 ************************************ 00:06:10.683 START TEST accel_crc32c 00:06:10.683 ************************************ 00:06:10.683 14:01:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:10.683 14:01:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.683 14:01:13 -- accel/accel.sh@17 -- # local accel_module 00:06:10.683 14:01:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.683 14:01:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:10.683 14:01:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.683 14:01:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.683 14:01:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.683 14:01:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.683 14:01:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.683 14:01:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.683 14:01:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.683 14:01:13 -- accel/accel.sh@42 -- # jq -r . 00:06:10.683 [2024-12-08 14:01:13.492525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.683 [2024-12-08 14:01:13.492649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58517 ] 00:06:10.944 [2024-12-08 14:01:13.641951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.944 [2024-12-08 14:01:13.829699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.857 14:01:15 -- accel/accel.sh@18 -- # out=' 00:06:12.857 SPDK Configuration: 00:06:12.857 Core mask: 0x1 00:06:12.857 00:06:12.857 Accel Perf Configuration: 00:06:12.857 Workload Type: crc32c 00:06:12.857 CRC-32C seed: 32 00:06:12.857 Transfer size: 4096 bytes 00:06:12.857 Vector count 1 00:06:12.857 Module: software 00:06:12.857 Queue depth: 32 00:06:12.857 Allocate depth: 32 00:06:12.857 # threads/core: 1 00:06:12.857 Run time: 1 seconds 00:06:12.857 Verify: Yes 00:06:12.857 00:06:12.857 Running for 1 seconds... 
00:06:12.857 00:06:12.857 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.857 ------------------------------------------------------------------------------------ 00:06:12.857 0,0 460320/s 1798 MiB/s 0 0 00:06:12.857 ==================================================================================== 00:06:12.857 Total 460320/s 1798 MiB/s 0 0' 00:06:12.857 14:01:15 -- accel/accel.sh@20 -- # IFS=: 00:06:12.857 14:01:15 -- accel/accel.sh@20 -- # read -r var val 00:06:12.857 14:01:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:12.857 14:01:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:12.857 14:01:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.857 14:01:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.857 14:01:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.857 14:01:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.857 14:01:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.857 14:01:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.857 14:01:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.857 14:01:15 -- accel/accel.sh@42 -- # jq -r . 00:06:12.857 [2024-12-08 14:01:15.697337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.857 [2024-12-08 14:01:15.697502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58543 ] 00:06:13.118 [2024-12-08 14:01:15.849628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.379 [2024-12-08 14:01:16.072275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=0x1 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=crc32c 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=32 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=software 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=32 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=32 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=1 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.379 14:01:16 -- accel/accel.sh@21 -- # val=Yes 00:06:13.379 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.379 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.380 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.380 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.380 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.380 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.380 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:13.380 14:01:16 -- accel/accel.sh@21 -- # val= 00:06:13.380 14:01:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.380 14:01:16 -- accel/accel.sh@20 -- # IFS=: 00:06:13.380 14:01:16 -- accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@21 -- # val= 00:06:15.297 14:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@21 -- # val= 00:06:15.297 14:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@21 -- # val= 00:06:15.297 14:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@21 -- # val= 00:06:15.297 14:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@21 -- # val= 00:06:15.297 14:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.297 14:01:17 -- 
accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@21 -- # val= 00:06:15.297 14:01:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.297 14:01:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.297 14:01:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.297 ************************************ 00:06:15.297 END TEST accel_crc32c 00:06:15.297 ************************************ 00:06:15.297 14:01:17 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:15.297 14:01:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.297 00:06:15.297 real 0m4.452s 00:06:15.297 user 0m3.908s 00:06:15.297 sys 0m0.326s 00:06:15.297 14:01:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.297 14:01:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.297 14:01:17 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:15.297 14:01:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:15.297 14:01:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.297 14:01:17 -- common/autotest_common.sh@10 -- # set +x 00:06:15.297 ************************************ 00:06:15.297 START TEST accel_crc32c_C2 00:06:15.297 ************************************ 00:06:15.297 14:01:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:15.297 14:01:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.297 14:01:17 -- accel/accel.sh@17 -- # local accel_module 00:06:15.297 14:01:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:15.297 14:01:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:15.297 14:01:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.297 14:01:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.297 14:01:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.297 14:01:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.297 14:01:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.297 14:01:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.297 14:01:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.297 14:01:17 -- accel/accel.sh@42 -- # jq -r . 00:06:15.297 [2024-12-08 14:01:18.015906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.297 [2024-12-08 14:01:18.016197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58584 ] 00:06:15.297 [2024-12-08 14:01:18.169452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.559 [2024-12-08 14:01:18.394886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.477 14:01:20 -- accel/accel.sh@18 -- # out=' 00:06:17.477 SPDK Configuration: 00:06:17.477 Core mask: 0x1 00:06:17.477 00:06:17.477 Accel Perf Configuration: 00:06:17.477 Workload Type: crc32c 00:06:17.477 CRC-32C seed: 0 00:06:17.477 Transfer size: 4096 bytes 00:06:17.477 Vector count 2 00:06:17.477 Module: software 00:06:17.477 Queue depth: 32 00:06:17.477 Allocate depth: 32 00:06:17.477 # threads/core: 1 00:06:17.477 Run time: 1 seconds 00:06:17.477 Verify: Yes 00:06:17.477 00:06:17.477 Running for 1 seconds... 
00:06:17.477 00:06:17.477 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.477 ------------------------------------------------------------------------------------ 00:06:17.477 0,0 386688/s 3021 MiB/s 0 0 00:06:17.477 ==================================================================================== 00:06:17.477 Total 386688/s 1510 MiB/s 0 0' 00:06:17.477 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.477 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.477 14:01:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.477 14:01:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.477 14:01:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.477 14:01:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.477 14:01:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.477 14:01:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.477 14:01:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.477 14:01:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.477 14:01:20 -- accel/accel.sh@42 -- # jq -r . 00:06:17.477 14:01:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.477 [2024-12-08 14:01:20.278618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.478 [2024-12-08 14:01:20.279103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58610 ] 00:06:17.738 [2024-12-08 14:01:20.426798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.738 [2024-12-08 14:01:20.653487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val=0x1 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val=crc32c 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val=0 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val=software 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:17.999 14:01:20 -- accel/accel.sh@21 -- # val=32 00:06:17.999 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.999 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.000 14:01:20 -- accel/accel.sh@21 -- # val=32 00:06:18.000 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.000 14:01:20 -- accel/accel.sh@21 -- # val=1 00:06:18.000 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.000 14:01:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.000 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.000 14:01:20 -- accel/accel.sh@21 -- # val=Yes 00:06:18.000 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.000 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:18.000 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:18.000 14:01:20 -- accel/accel.sh@21 -- # val= 00:06:18.000 14:01:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # IFS=: 00:06:18.000 14:01:20 -- accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@21 -- # val= 00:06:19.384 14:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@21 -- # val= 00:06:19.384 14:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@21 -- # val= 00:06:19.384 14:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@21 -- # val= 00:06:19.384 14:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@21 -- # val= 00:06:19.384 14:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.384 14:01:22 -- 
accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@21 -- # val= 00:06:19.384 14:01:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # IFS=: 00:06:19.384 14:01:22 -- accel/accel.sh@20 -- # read -r var val 00:06:19.384 14:01:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.384 ************************************ 00:06:19.384 END TEST accel_crc32c_C2 00:06:19.384 ************************************ 00:06:19.384 14:01:22 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:19.384 14:01:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.384 00:06:19.384 real 0m4.310s 00:06:19.384 user 0m3.764s 00:06:19.384 sys 0m0.331s 00:06:19.384 14:01:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.384 14:01:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.642 14:01:22 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:19.642 14:01:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.642 14:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.642 14:01:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.642 ************************************ 00:06:19.642 START TEST accel_copy 00:06:19.642 ************************************ 00:06:19.642 14:01:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:19.642 14:01:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.642 14:01:22 -- accel/accel.sh@17 -- # local accel_module 00:06:19.642 14:01:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:19.642 14:01:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:19.642 14:01:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.642 14:01:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.642 14:01:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.642 14:01:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.642 14:01:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.642 14:01:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.642 14:01:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.642 14:01:22 -- accel/accel.sh@42 -- # jq -r . 00:06:19.642 [2024-12-08 14:01:22.376257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.642 [2024-12-08 14:01:22.376357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 00:06:19.643 [2024-12-08 14:01:22.521740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.901 [2024-12-08 14:01:22.661224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.802 14:01:24 -- accel/accel.sh@18 -- # out=' 00:06:21.802 SPDK Configuration: 00:06:21.802 Core mask: 0x1 00:06:21.802 00:06:21.802 Accel Perf Configuration: 00:06:21.802 Workload Type: copy 00:06:21.802 Transfer size: 4096 bytes 00:06:21.802 Vector count 1 00:06:21.802 Module: software 00:06:21.802 Queue depth: 32 00:06:21.802 Allocate depth: 32 00:06:21.802 # threads/core: 1 00:06:21.802 Run time: 1 seconds 00:06:21.802 Verify: Yes 00:06:21.802 00:06:21.802 Running for 1 seconds... 
00:06:21.802 00:06:21.802 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.802 ------------------------------------------------------------------------------------ 00:06:21.802 0,0 374816/s 1464 MiB/s 0 0 00:06:21.802 ==================================================================================== 00:06:21.802 Total 374816/s 1464 MiB/s 0 0' 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:21.802 14:01:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.802 14:01:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.802 14:01:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.802 14:01:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.802 14:01:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.802 14:01:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.802 14:01:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.802 14:01:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.802 14:01:24 -- accel/accel.sh@42 -- # jq -r . 00:06:21.802 [2024-12-08 14:01:24.277449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.802 [2024-12-08 14:01:24.277557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58677 ] 00:06:21.802 [2024-12-08 14:01:24.422948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.802 [2024-12-08 14:01:24.561587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=0x1 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=copy 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- 
accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=software 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=32 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=32 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=1 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val=Yes 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:21.802 14:01:24 -- accel/accel.sh@21 -- # val= 00:06:21.802 14:01:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # IFS=: 00:06:21.802 14:01:24 -- accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@21 -- # val= 00:06:23.713 14:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@21 -- # val= 00:06:23.713 14:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@21 -- # val= 00:06:23.713 14:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@21 -- # val= 00:06:23.713 14:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@21 -- # val= 00:06:23.713 14:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@21 -- # val= 00:06:23.713 14:01:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.713 14:01:26 -- accel/accel.sh@20 -- # IFS=: 00:06:23.713 14:01:26 -- 
accel/accel.sh@20 -- # read -r var val 00:06:23.713 14:01:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.713 14:01:26 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:23.713 14:01:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.713 00:06:23.713 real 0m3.844s 00:06:23.713 user 0m3.407s 00:06:23.713 sys 0m0.231s 00:06:23.713 14:01:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.713 ************************************ 00:06:23.713 END TEST accel_copy 00:06:23.713 ************************************ 00:06:23.713 14:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:23.713 14:01:26 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.713 14:01:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:23.713 14:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.713 14:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:23.713 ************************************ 00:06:23.713 START TEST accel_fill 00:06:23.713 ************************************ 00:06:23.713 14:01:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.713 14:01:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.714 14:01:26 -- accel/accel.sh@17 -- # local accel_module 00:06:23.714 14:01:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.714 14:01:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.714 14:01:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.714 14:01:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.714 14:01:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.714 14:01:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.714 14:01:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.714 14:01:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.714 14:01:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.714 14:01:26 -- accel/accel.sh@42 -- # jq -r . 00:06:23.714 [2024-12-08 14:01:26.278158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.714 [2024-12-08 14:01:26.278236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58718 ] 00:06:23.714 [2024-12-08 14:01:26.419431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.714 [2024-12-08 14:01:26.557677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.614 14:01:28 -- accel/accel.sh@18 -- # out=' 00:06:25.614 SPDK Configuration: 00:06:25.614 Core mask: 0x1 00:06:25.614 00:06:25.614 Accel Perf Configuration: 00:06:25.614 Workload Type: fill 00:06:25.614 Fill pattern: 0x80 00:06:25.614 Transfer size: 4096 bytes 00:06:25.614 Vector count 1 00:06:25.614 Module: software 00:06:25.614 Queue depth: 64 00:06:25.614 Allocate depth: 64 00:06:25.614 # threads/core: 1 00:06:25.614 Run time: 1 seconds 00:06:25.614 Verify: Yes 00:06:25.614 00:06:25.614 Running for 1 seconds... 
00:06:25.614 00:06:25.614 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.614 ------------------------------------------------------------------------------------ 00:06:25.614 0,0 527040/s 2058 MiB/s 0 0 00:06:25.614 ==================================================================================== 00:06:25.614 Total 527040/s 2058 MiB/s 0 0' 00:06:25.614 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.614 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.614 14:01:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.614 14:01:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.614 14:01:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.614 14:01:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.614 14:01:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.614 14:01:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.615 14:01:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.615 14:01:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.615 14:01:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.615 14:01:28 -- accel/accel.sh@42 -- # jq -r . 00:06:25.615 [2024-12-08 14:01:28.168959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.615 [2024-12-08 14:01:28.169091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58744 ] 00:06:25.615 [2024-12-08 14:01:28.315975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.615 [2024-12-08 14:01:28.456845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val=0x1 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val=fill 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val=0x80 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 
00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val=software 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.906 14:01:28 -- accel/accel.sh@21 -- # val=64 00:06:25.906 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.906 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.907 14:01:28 -- accel/accel.sh@21 -- # val=64 00:06:25.907 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.907 14:01:28 -- accel/accel.sh@21 -- # val=1 00:06:25.907 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.907 14:01:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.907 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.907 14:01:28 -- accel/accel.sh@21 -- # val=Yes 00:06:25.907 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.907 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.907 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:25.907 14:01:28 -- accel/accel.sh@21 -- # val= 00:06:25.907 14:01:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # IFS=: 00:06:25.907 14:01:28 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 14:01:30 -- accel/accel.sh@21 -- # val= 00:06:27.279 14:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 14:01:30 -- accel/accel.sh@21 -- # val= 00:06:27.279 14:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 14:01:30 -- accel/accel.sh@21 -- # val= 00:06:27.279 14:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 14:01:30 -- accel/accel.sh@21 -- # val= 00:06:27.279 14:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 14:01:30 -- accel/accel.sh@21 -- # val= 00:06:27.279 14:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 14:01:30 -- accel/accel.sh@21 -- # val= 00:06:27.279 14:01:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # IFS=: 00:06:27.279 14:01:30 -- accel/accel.sh@20 -- # read -r var val 00:06:27.279 ************************************ 00:06:27.279 END TEST accel_fill 00:06:27.279 ************************************ 00:06:27.279 14:01:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.279 14:01:30 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:27.279 14:01:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.279 00:06:27.279 real 0m3.795s 00:06:27.279 user 0m3.375s 00:06:27.279 sys 0m0.215s 00:06:27.279 14:01:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.279 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.279 14:01:30 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:27.279 14:01:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.279 14:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.279 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:27.279 ************************************ 00:06:27.279 START TEST accel_copy_crc32c 00:06:27.279 ************************************ 00:06:27.279 14:01:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:27.279 14:01:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.279 14:01:30 -- accel/accel.sh@17 -- # local accel_module 00:06:27.279 14:01:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:27.279 14:01:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:27.279 14:01:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.279 14:01:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.279 14:01:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.279 14:01:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.279 14:01:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.279 14:01:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.279 14:01:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.279 14:01:30 -- accel/accel.sh@42 -- # jq -r . 00:06:27.279 [2024-12-08 14:01:30.126739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.280 [2024-12-08 14:01:30.126865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58785 ] 00:06:27.552 [2024-12-08 14:01:30.284503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.856 [2024-12-08 14:01:30.470576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.766 14:01:32 -- accel/accel.sh@18 -- # out=' 00:06:29.766 SPDK Configuration: 00:06:29.766 Core mask: 0x1 00:06:29.766 00:06:29.766 Accel Perf Configuration: 00:06:29.766 Workload Type: copy_crc32c 00:06:29.766 CRC-32C seed: 0 00:06:29.766 Vector size: 4096 bytes 00:06:29.766 Transfer size: 4096 bytes 00:06:29.766 Vector count 1 00:06:29.766 Module: software 00:06:29.766 Queue depth: 32 00:06:29.766 Allocate depth: 32 00:06:29.766 # threads/core: 1 00:06:29.766 Run time: 1 seconds 00:06:29.766 Verify: Yes 00:06:29.766 00:06:29.766 Running for 1 seconds... 
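Annotation: the dense runs of case "$var" in, IFS=:, read -r var val, and val=... entries above and throughout this section are bash xtrace from accel.sh splitting each line of accel_perf's printed report on ":" and keeping the fields it needs; that is where the accel_opc=fill / accel_module=software assignments and the closing [[ -n software ]] checks come from. A minimal sketch of that loop, with the variable handling assumed from the trace rather than copied out of accel.sh:

    perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    # parse the report line by line; "Module: software" -> var=Module, val=" software"
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;    # e.g. "fill"
            *Module*)          accel_module=${val//[[:space:]]/} ;; # e.g. "software"
        esac
    done < <("$perf" -t 1 -w fill -f 128 -q 64 -a 64 -y)
    # the checks traced at the end of each TEST block:
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]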
00:06:29.766 00:06:29.766 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.766 ------------------------------------------------------------------------------------ 00:06:29.766 0,0 237216/s 926 MiB/s 0 0 00:06:29.766 ==================================================================================== 00:06:29.766 Total 237216/s 926 MiB/s 0 0' 00:06:29.766 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:29.766 14:01:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:29.766 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:29.766 14:01:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:29.766 14:01:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.766 14:01:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.766 14:01:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.766 14:01:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.766 14:01:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.766 14:01:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.766 14:01:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.766 14:01:32 -- accel/accel.sh@42 -- # jq -r . 00:06:29.766 [2024-12-08 14:01:32.235708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.766 [2024-12-08 14:01:32.235813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:06:29.766 [2024-12-08 14:01:32.385524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.766 [2024-12-08 14:01:32.568054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=0x1 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=0 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 
14:01:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=software 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=32 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=32 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=1 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val=Yes 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:30.027 14:01:32 -- accel/accel.sh@21 -- # val= 00:06:30.027 14:01:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # IFS=: 00:06:30.027 14:01:32 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@21 -- # val= 00:06:31.412 14:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@21 -- # val= 00:06:31.412 14:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@21 -- # val= 00:06:31.412 14:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@21 -- # val= 00:06:31.412 14:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # IFS=: 
00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@21 -- # val= 00:06:31.412 14:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@21 -- # val= 00:06:31.412 14:01:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # IFS=: 00:06:31.412 14:01:34 -- accel/accel.sh@20 -- # read -r var val 00:06:31.412 14:01:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.412 14:01:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:31.412 14:01:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.412 00:06:31.412 real 0m4.100s 00:06:31.412 user 0m3.664s 00:06:31.412 sys 0m0.230s 00:06:31.412 ************************************ 00:06:31.412 END TEST accel_copy_crc32c 00:06:31.412 ************************************ 00:06:31.412 14:01:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.412 14:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:31.412 14:01:34 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:31.412 14:01:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:31.412 14:01:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.412 14:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:31.412 ************************************ 00:06:31.412 START TEST accel_copy_crc32c_C2 00:06:31.412 ************************************ 00:06:31.412 14:01:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:31.412 14:01:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.412 14:01:34 -- accel/accel.sh@17 -- # local accel_module 00:06:31.412 14:01:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:31.412 14:01:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:31.412 14:01:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.412 14:01:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.412 14:01:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.412 14:01:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.412 14:01:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.412 14:01:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.412 14:01:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.412 14:01:34 -- accel/accel.sh@42 -- # jq -r . 00:06:31.412 [2024-12-08 14:01:34.279471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
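Annotation: the throughput columns in the copy_crc32c report above are self-consistent. At the 4096-byte transfer size, 237216 transfers/s comes out to the reported 926 MiB/s; a quick check with shell arithmetic:

    echo $(( 237216 * 4096 / 1024 / 1024 ))   # -> 926 (MiB/s)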
00:06:31.412 [2024-12-08 14:01:34.279550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58852 ] 00:06:31.673 [2024-12-08 14:01:34.415042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.934 [2024-12-08 14:01:34.618024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.837 14:01:36 -- accel/accel.sh@18 -- # out=' 00:06:33.837 SPDK Configuration: 00:06:33.837 Core mask: 0x1 00:06:33.837 00:06:33.837 Accel Perf Configuration: 00:06:33.837 Workload Type: copy_crc32c 00:06:33.837 CRC-32C seed: 0 00:06:33.837 Vector size: 4096 bytes 00:06:33.837 Transfer size: 8192 bytes 00:06:33.837 Vector count 2 00:06:33.837 Module: software 00:06:33.837 Queue depth: 32 00:06:33.837 Allocate depth: 32 00:06:33.837 # threads/core: 1 00:06:33.837 Run time: 1 seconds 00:06:33.837 Verify: Yes 00:06:33.837 00:06:33.837 Running for 1 seconds... 00:06:33.837 00:06:33.837 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.837 ------------------------------------------------------------------------------------ 00:06:33.837 0,0 175904/s 1374 MiB/s 0 0 00:06:33.837 ==================================================================================== 00:06:33.837 Total 175904/s 1374 MiB/s 0 0' 00:06:33.837 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:33.837 14:01:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:33.837 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:33.837 14:01:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:33.837 14:01:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.837 14:01:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.837 14:01:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.837 14:01:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.837 14:01:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.837 14:01:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.837 14:01:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.837 14:01:36 -- accel/accel.sh@42 -- # jq -r . 00:06:33.837 [2024-12-08 14:01:36.497139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
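Annotation: the same check works for the two-vector variant above, where bandwidth is counted against the full 8192-byte transfer (two 4096-byte source vectors), matching the 1374 MiB/s shown in both rows of the table:

    echo $(( 175904 * 8192 / 1024 / 1024 ))   # -> 1374 (MiB/s)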
00:06:33.837 [2024-12-08 14:01:36.497242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:06:33.837 [2024-12-08 14:01:36.641310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.098 [2024-12-08 14:01:36.828090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=0x1 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=0 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=software 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=32 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=32 
00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=1 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val=Yes 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:34.098 14:01:36 -- accel/accel.sh@21 -- # val= 00:06:34.098 14:01:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # IFS=: 00:06:34.098 14:01:36 -- accel/accel.sh@20 -- # read -r var val 00:06:36.002 14:01:38 -- accel/accel.sh@21 -- # val= 00:06:36.002 14:01:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.002 14:01:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.002 14:01:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.002 14:01:38 -- accel/accel.sh@21 -- # val= 00:06:36.002 14:01:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.003 14:01:38 -- accel/accel.sh@21 -- # val= 00:06:36.003 14:01:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.003 14:01:38 -- accel/accel.sh@21 -- # val= 00:06:36.003 14:01:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.003 14:01:38 -- accel/accel.sh@21 -- # val= 00:06:36.003 14:01:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.003 14:01:38 -- accel/accel.sh@21 -- # val= 00:06:36.003 14:01:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # IFS=: 00:06:36.003 14:01:38 -- accel/accel.sh@20 -- # read -r var val 00:06:36.003 14:01:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.003 14:01:38 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:36.003 14:01:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.003 00:06:36.003 real 0m4.211s 00:06:36.003 user 0m3.727s 00:06:36.003 sys 0m0.274s 00:06:36.003 14:01:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.003 ************************************ 00:06:36.003 END TEST accel_copy_crc32c_C2 00:06:36.003 ************************************ 00:06:36.003 14:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.003 14:01:38 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:36.003 14:01:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:36.003 14:01:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.003 14:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.003 ************************************ 00:06:36.003 START TEST accel_dualcast 00:06:36.003 ************************************ 00:06:36.003 14:01:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:36.003 14:01:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.003 14:01:38 -- accel/accel.sh@17 -- # local accel_module 00:06:36.003 14:01:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:36.003 14:01:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:36.003 14:01:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.003 14:01:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.003 14:01:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.003 14:01:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.003 14:01:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.003 14:01:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.003 14:01:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.003 14:01:38 -- accel/accel.sh@42 -- # jq -r . 00:06:36.003 [2024-12-08 14:01:38.540023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.003 [2024-12-08 14:01:38.540129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:06:36.003 [2024-12-08 14:01:38.687641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.003 [2024-12-08 14:01:38.825975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.905 14:01:40 -- accel/accel.sh@18 -- # out=' 00:06:37.905 SPDK Configuration: 00:06:37.905 Core mask: 0x1 00:06:37.905 00:06:37.905 Accel Perf Configuration: 00:06:37.905 Workload Type: dualcast 00:06:37.905 Transfer size: 4096 bytes 00:06:37.905 Vector count 1 00:06:37.905 Module: software 00:06:37.905 Queue depth: 32 00:06:37.905 Allocate depth: 32 00:06:37.905 # threads/core: 1 00:06:37.905 Run time: 1 seconds 00:06:37.905 Verify: Yes 00:06:37.905 00:06:37.905 Running for 1 seconds... 00:06:37.905 00:06:37.905 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.905 ------------------------------------------------------------------------------------ 00:06:37.905 0,0 441600/s 1725 MiB/s 0 0 00:06:37.905 ==================================================================================== 00:06:37.905 Total 441600/s 1725 MiB/s 0 0' 00:06:37.905 14:01:40 -- accel/accel.sh@20 -- # IFS=: 00:06:37.905 14:01:40 -- accel/accel.sh@20 -- # read -r var val 00:06:37.905 14:01:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:37.905 14:01:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:37.905 14:01:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.905 14:01:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.905 14:01:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.905 14:01:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.905 14:01:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.905 14:01:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.905 14:01:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.905 14:01:40 -- accel/accel.sh@42 -- # jq -r . 
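Annotation: each case reduces to a single accel_perf invocation, and its flags map directly onto the configuration block it prints: -t run time in seconds, -w workload type, -q queue depth, -a allocate depth, -y verify, -f the fill pattern (128 = 0x80), -C the vector count; -c /dev/fd/62 is evidently how the harness feeds in the JSON config assembled by build_accel_config and filtered through jq. These meanings are inferred by matching the traced command lines to the printed reports, so a standalone rerun is a sketch, not a supported recipe:

    # dualcast for 1 second with verification; with no -q/-a given, the report
    # above shows the apparent defaults of queue depth 32, allocate depth 32
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y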
00:06:37.905 [2024-12-08 14:01:40.445632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.905 [2024-12-08 14:01:40.445734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:06:37.905 [2024-12-08 14:01:40.597334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.167 [2024-12-08 14:01:40.850752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=0x1 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=dualcast 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=software 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=32 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=32 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=1 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 
14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val=Yes 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:38.167 14:01:41 -- accel/accel.sh@21 -- # val= 00:06:38.167 14:01:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # IFS=: 00:06:38.167 14:01:41 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@21 -- # val= 00:06:40.085 14:01:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@21 -- # val= 00:06:40.085 14:01:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@21 -- # val= 00:06:40.085 14:01:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@21 -- # val= 00:06:40.085 14:01:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@21 -- # val= 00:06:40.085 14:01:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@21 -- # val= 00:06:40.085 14:01:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # IFS=: 00:06:40.085 14:01:42 -- accel/accel.sh@20 -- # read -r var val 00:06:40.085 14:01:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.085 14:01:42 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:40.085 14:01:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.085 00:06:40.085 real 0m4.181s 00:06:40.085 user 0m3.682s 00:06:40.085 sys 0m0.281s 00:06:40.085 14:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.085 14:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.085 ************************************ 00:06:40.085 END TEST accel_dualcast 00:06:40.085 ************************************ 00:06:40.085 14:01:42 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:40.085 14:01:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:40.085 14:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.085 14:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.085 ************************************ 00:06:40.085 START TEST accel_compare 00:06:40.085 ************************************ 00:06:40.085 14:01:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:40.085 
14:01:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.085 14:01:42 -- accel/accel.sh@17 -- # local accel_module 00:06:40.085 14:01:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:40.085 14:01:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:40.085 14:01:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.085 14:01:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.085 14:01:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.085 14:01:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.085 14:01:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.085 14:01:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.085 14:01:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.085 14:01:42 -- accel/accel.sh@42 -- # jq -r . 00:06:40.085 [2024-12-08 14:01:42.802754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.085 [2024-12-08 14:01:42.802922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:06:40.085 [2024-12-08 14:01:42.957318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.345 [2024-12-08 14:01:43.184888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.240 14:01:44 -- accel/accel.sh@18 -- # out=' 00:06:42.240 SPDK Configuration: 00:06:42.240 Core mask: 0x1 00:06:42.240 00:06:42.240 Accel Perf Configuration: 00:06:42.240 Workload Type: compare 00:06:42.240 Transfer size: 4096 bytes 00:06:42.240 Vector count 1 00:06:42.240 Module: software 00:06:42.240 Queue depth: 32 00:06:42.240 Allocate depth: 32 00:06:42.240 # threads/core: 1 00:06:42.240 Run time: 1 seconds 00:06:42.240 Verify: Yes 00:06:42.240 00:06:42.240 Running for 1 seconds... 00:06:42.240 00:06:42.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.240 ------------------------------------------------------------------------------------ 00:06:42.240 0,0 427328/s 1669 MiB/s 0 0 00:06:42.240 ==================================================================================== 00:06:42.240 Total 427328/s 1669 MiB/s 0 0' 00:06:42.240 14:01:44 -- accel/accel.sh@20 -- # IFS=: 00:06:42.240 14:01:44 -- accel/accel.sh@20 -- # read -r var val 00:06:42.240 14:01:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:42.240 14:01:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:42.240 14:01:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.240 14:01:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.240 14:01:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.240 14:01:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.240 14:01:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.240 14:01:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.240 14:01:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.240 14:01:44 -- accel/accel.sh@42 -- # jq -r . 00:06:42.240 [2024-12-08 14:01:44.938223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
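Annotation: every TEST in this stretch launches accel_perf twice, which is why each one logs two EAL startup banners with paired pids (58986 then 59020 for compare): the first pass produces the quoted out='...' report, and the second pass's output feeds the parsing loop traced afterwards. The shape, inferred from the banners rather than read out of accel.sh:

    perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    out=$("$perf" -t 1 -w compare -y)   # first pid: report captured into $out
    "$perf" -t 1 -w compare -y          # second pid: output parsed line by line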
00:06:42.240 [2024-12-08 14:01:44.938348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59020 ] 00:06:42.240 [2024-12-08 14:01:45.087331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.498 [2024-12-08 14:01:45.235893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=0x1 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=compare 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=software 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=32 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=32 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=1 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val=Yes 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:42.498 14:01:45 -- accel/accel.sh@21 -- # val= 00:06:42.498 14:01:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # IFS=: 00:06:42.498 14:01:45 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@21 -- # val= 00:06:44.409 14:01:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@21 -- # val= 00:06:44.409 14:01:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@21 -- # val= 00:06:44.409 14:01:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@21 -- # val= 00:06:44.409 14:01:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@21 -- # val= 00:06:44.409 14:01:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@21 -- # val= 00:06:44.409 14:01:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # IFS=: 00:06:44.409 14:01:46 -- accel/accel.sh@20 -- # read -r var val 00:06:44.409 14:01:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.409 14:01:46 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:44.409 14:01:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.409 00:06:44.409 real 0m4.082s 00:06:44.409 user 0m3.587s 00:06:44.409 sys 0m0.280s 00:06:44.409 14:01:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.409 14:01:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.409 ************************************ 00:06:44.409 END TEST accel_compare 00:06:44.409 ************************************ 00:06:44.409 14:01:46 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:44.409 14:01:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:44.409 14:01:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.409 14:01:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.409 ************************************ 00:06:44.409 START TEST accel_xor 00:06:44.409 ************************************ 00:06:44.409 14:01:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:44.409 14:01:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.409 14:01:46 -- accel/accel.sh@17 -- # local accel_module 00:06:44.409 
14:01:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:44.409 14:01:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:44.409 14:01:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.409 14:01:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.409 14:01:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.409 14:01:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.409 14:01:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.409 14:01:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.409 14:01:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.409 14:01:46 -- accel/accel.sh@42 -- # jq -r . 00:06:44.409 [2024-12-08 14:01:46.903846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.409 [2024-12-08 14:01:46.903961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59062 ] 00:06:44.409 [2024-12-08 14:01:47.053095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.409 [2024-12-08 14:01:47.224507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.313 14:01:48 -- accel/accel.sh@18 -- # out=' 00:06:46.313 SPDK Configuration: 00:06:46.313 Core mask: 0x1 00:06:46.313 00:06:46.313 Accel Perf Configuration: 00:06:46.313 Workload Type: xor 00:06:46.313 Source buffers: 2 00:06:46.313 Transfer size: 4096 bytes 00:06:46.313 Vector count 1 00:06:46.313 Module: software 00:06:46.313 Queue depth: 32 00:06:46.313 Allocate depth: 32 00:06:46.313 # threads/core: 1 00:06:46.313 Run time: 1 seconds 00:06:46.313 Verify: Yes 00:06:46.313 00:06:46.313 Running for 1 seconds... 00:06:46.313 00:06:46.313 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.313 ------------------------------------------------------------------------------------ 00:06:46.313 0,0 340896/s 1331 MiB/s 0 0 00:06:46.313 ==================================================================================== 00:06:46.313 Total 340896/s 1331 MiB/s 0 0' 00:06:46.313 14:01:48 -- accel/accel.sh@20 -- # IFS=: 00:06:46.313 14:01:48 -- accel/accel.sh@20 -- # read -r var val 00:06:46.313 14:01:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:46.313 14:01:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.313 14:01:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.313 14:01:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.313 14:01:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.313 14:01:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.313 14:01:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.313 14:01:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.313 14:01:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.314 14:01:48 -- accel/accel.sh@42 -- # jq -r . 00:06:46.314 [2024-12-08 14:01:49.027878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
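Annotation: the two xor cases differ only in -x, the source-buffer count. The run above reports "Source buffers: 2" with no -x given, while the -x 3 run that follows reports "Source buffers: 3". Side by side (binary path as traced; -c omitted as in the sketches above):

    perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$perf" -t 1 -w xor -y        # Source buffers: 2 (apparent default)
    "$perf" -t 1 -w xor -y -x 3   # Source buffers: 3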
00:06:46.314 [2024-12-08 14:01:49.028023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:06:46.314 [2024-12-08 14:01:49.180852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.571 [2024-12-08 14:01:49.369411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=0x1 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=xor 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=2 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=software 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=32 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=32 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=1 00:06:46.829 14:01:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val=Yes 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:46.829 14:01:49 -- accel/accel.sh@21 -- # val= 00:06:46.829 14:01:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # IFS=: 00:06:46.829 14:01:49 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@21 -- # val= 00:06:48.205 14:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@21 -- # val= 00:06:48.205 14:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@21 -- # val= 00:06:48.205 14:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@21 -- # val= 00:06:48.205 14:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@21 -- # val= 00:06:48.205 14:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@21 -- # val= 00:06:48.205 14:01:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # IFS=: 00:06:48.205 14:01:51 -- accel/accel.sh@20 -- # read -r var val 00:06:48.205 14:01:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.205 14:01:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:48.205 14:01:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.205 00:06:48.205 real 0m4.160s 00:06:48.205 user 0m3.698s 00:06:48.205 sys 0m0.252s 00:06:48.205 14:01:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.205 ************************************ 00:06:48.205 END TEST accel_xor 00:06:48.205 ************************************ 00:06:48.205 14:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:48.205 14:01:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:48.205 14:01:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.205 14:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.205 14:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:48.205 ************************************ 00:06:48.205 START TEST accel_xor 00:06:48.205 ************************************ 00:06:48.205 
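The pass that begins here repeats the xor workload with an extra -x 3 argument; as the two configuration blocks show, this raises the number of xor source buffers from the default 2 ("Source buffers: 2" in the first table) to 3 ("Source buffers: 3" below). A rough sketch of invoking the same workload by hand — the binary path is the one this VM image uses, and dropping the -c /dev/fd/62 JSON config that the harness pipes in is an assumption here, not something this log demonstrates:

  # 1-second software xor run with verification (-y) and three source buffers (-x 3)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3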
14:01:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:48.205 14:01:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.205 14:01:51 -- accel/accel.sh@17 -- # local accel_module 00:06:48.205 14:01:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:48.205 14:01:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:48.205 14:01:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.205 14:01:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.205 14:01:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.205 14:01:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.205 14:01:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.205 14:01:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.205 14:01:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.205 14:01:51 -- accel/accel.sh@42 -- # jq -r . 00:06:48.205 [2024-12-08 14:01:51.110304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.205 [2024-12-08 14:01:51.110449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:06:48.464 [2024-12-08 14:01:51.259347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.721 [2024-12-08 14:01:51.414379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.095 14:01:53 -- accel/accel.sh@18 -- # out=' 00:06:50.095 SPDK Configuration: 00:06:50.095 Core mask: 0x1 00:06:50.095 00:06:50.095 Accel Perf Configuration: 00:06:50.095 Workload Type: xor 00:06:50.095 Source buffers: 3 00:06:50.095 Transfer size: 4096 bytes 00:06:50.095 Vector count 1 00:06:50.095 Module: software 00:06:50.095 Queue depth: 32 00:06:50.095 Allocate depth: 32 00:06:50.095 # threads/core: 1 00:06:50.095 Run time: 1 seconds 00:06:50.095 Verify: Yes 00:06:50.095 00:06:50.095 Running for 1 seconds... 00:06:50.095 00:06:50.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.095 ------------------------------------------------------------------------------------ 00:06:50.095 0,0 425344/s 1661 MiB/s 0 0 00:06:50.095 ==================================================================================== 00:06:50.095 Total 425344/s 1661 MiB/s 0 0' 00:06:50.095 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.095 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.095 14:01:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:50.095 14:01:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.095 14:01:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:50.095 14:01:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.095 14:01:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.095 14:01:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.095 14:01:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.095 14:01:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.095 14:01:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.095 14:01:53 -- accel/accel.sh@42 -- # jq -r . 00:06:50.356 [2024-12-08 14:01:53.038554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.356 [2024-12-08 14:01:53.038660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59156 ] 00:06:50.356 [2024-12-08 14:01:53.189878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.617 [2024-12-08 14:01:53.425743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=0x1 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=xor 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=3 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=software 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=32 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=32 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=1 00:06:50.877 14:01:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val=Yes 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:50.877 14:01:53 -- accel/accel.sh@21 -- # val= 00:06:50.877 14:01:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # IFS=: 00:06:50.877 14:01:53 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@21 -- # val= 00:06:52.788 14:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@21 -- # val= 00:06:52.788 14:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@21 -- # val= 00:06:52.788 14:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@21 -- # val= 00:06:52.788 14:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@21 -- # val= 00:06:52.788 14:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@21 -- # val= 00:06:52.788 14:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:52.788 14:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.788 14:01:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.788 14:01:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:52.788 ************************************ 00:06:52.788 14:01:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.788 00:06:52.788 real 0m4.132s 00:06:52.788 user 0m3.635s 00:06:52.788 sys 0m0.285s 00:06:52.788 14:01:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.788 14:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:52.788 END TEST accel_xor 00:06:52.788 ************************************ 00:06:52.788 14:01:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:52.788 14:01:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:52.788 14:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.788 14:01:55 -- common/autotest_common.sh@10 -- # set +x 00:06:52.788 ************************************ 00:06:52.788 START TEST accel_dif_verify 00:06:52.788 ************************************ 
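A note on the result tables throughout this run: the bandwidth column follows from the transfer count and the 4096-byte transfer size (taking 1 MiB as 1048576 bytes, apparently truncated to a whole number), so on a single-core run the 0,0 row and the Total row carry the same figure. A one-line shell sanity check against the first xor table above — the numbers are from the log, the command itself is only an illustration:

  # 340896 transfers/s x 4096 bytes per transfer, in whole MiB/s
  echo $(( 340896 * 4096 / 1048576 ))   # prints 1331, matching the reported 1331 MiB/s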
00:06:52.788 14:01:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:52.788 14:01:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.788 14:01:55 -- accel/accel.sh@17 -- # local accel_module 00:06:52.788 14:01:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:52.788 14:01:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.788 14:01:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.788 14:01:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.788 14:01:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.788 14:01:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.788 14:01:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.788 14:01:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.788 14:01:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.788 14:01:55 -- accel/accel.sh@42 -- # jq -r . 00:06:52.788 [2024-12-08 14:01:55.297597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.788 [2024-12-08 14:01:55.297698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:06:52.788 [2024-12-08 14:01:55.445709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.788 [2024-12-08 14:01:55.617858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.682 14:01:57 -- accel/accel.sh@18 -- # out=' 00:06:54.682 SPDK Configuration: 00:06:54.682 Core mask: 0x1 00:06:54.682 00:06:54.682 Accel Perf Configuration: 00:06:54.682 Workload Type: dif_verify 00:06:54.682 Vector size: 4096 bytes 00:06:54.682 Transfer size: 4096 bytes 00:06:54.682 Block size: 512 bytes 00:06:54.682 Metadata size: 8 bytes 00:06:54.682 Vector count 1 00:06:54.682 Module: software 00:06:54.682 Queue depth: 32 00:06:54.682 Allocate depth: 32 00:06:54.682 # threads/core: 1 00:06:54.682 Run time: 1 seconds 00:06:54.682 Verify: No 00:06:54.682 00:06:54.682 Running for 1 seconds... 00:06:54.682 00:06:54.682 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.682 ------------------------------------------------------------------------------------ 00:06:54.682 0,0 98240/s 383 MiB/s 0 0 00:06:54.682 ==================================================================================== 00:06:54.682 Total 98240/s 383 MiB/s 0 0' 00:06:54.682 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.682 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.682 14:01:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:54.682 14:01:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.682 14:01:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.682 14:01:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.682 14:01:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.682 14:01:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.682 14:01:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.682 14:01:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.682 14:01:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.682 14:01:57 -- accel/accel.sh@42 -- # jq -r . 00:06:54.682 [2024-12-08 14:01:57.296126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:54.682 [2024-12-08 14:01:57.296233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:06:54.682 [2024-12-08 14:01:57.443356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.682 [2024-12-08 14:01:57.583640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val=0x1 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val=dif_verify 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val=software 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 
-- # val=32 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val=32 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val=1 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val=No 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:54.940 14:01:57 -- accel/accel.sh@21 -- # val= 00:06:54.940 14:01:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # IFS=: 00:06:54.940 14:01:57 -- accel/accel.sh@20 -- # read -r var val 00:06:56.314 14:01:59 -- accel/accel.sh@21 -- # val= 00:06:56.314 14:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.314 14:01:59 -- accel/accel.sh@20 -- # IFS=: 00:06:56.314 14:01:59 -- accel/accel.sh@20 -- # read -r var val 00:06:56.315 14:01:59 -- accel/accel.sh@21 -- # val= 00:06:56.315 14:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # IFS=: 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # read -r var val 00:06:56.315 14:01:59 -- accel/accel.sh@21 -- # val= 00:06:56.315 14:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # IFS=: 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # read -r var val 00:06:56.315 14:01:59 -- accel/accel.sh@21 -- # val= 00:06:56.315 14:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # IFS=: 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # read -r var val 00:06:56.315 14:01:59 -- accel/accel.sh@21 -- # val= 00:06:56.315 14:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # IFS=: 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # read -r var val 00:06:56.315 14:01:59 -- accel/accel.sh@21 -- # val= 00:06:56.315 14:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # IFS=: 00:06:56.315 14:01:59 -- accel/accel.sh@20 -- # read -r var val 00:06:56.315 14:01:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.315 14:01:59 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:56.315 14:01:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.315 00:06:56.315 real 0m3.902s 00:06:56.315 user 0m3.444s 00:06:56.315 sys 0m0.250s 00:06:56.315 14:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.315 14:01:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.315 ************************************ 00:06:56.315 END TEST 
accel_dif_verify 00:06:56.315 ************************************ 00:06:56.315 14:01:59 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:56.315 14:01:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:56.315 14:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.315 14:01:59 -- common/autotest_common.sh@10 -- # set +x 00:06:56.315 ************************************ 00:06:56.315 START TEST accel_dif_generate 00:06:56.315 ************************************ 00:06:56.315 14:01:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:56.315 14:01:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.315 14:01:59 -- accel/accel.sh@17 -- # local accel_module 00:06:56.315 14:01:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:56.315 14:01:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.315 14:01:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.315 14:01:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.315 14:01:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.315 14:01:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.315 14:01:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.315 14:01:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.315 14:01:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.315 14:01:59 -- accel/accel.sh@42 -- # jq -r . 00:06:56.573 [2024-12-08 14:01:59.235716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.573 [2024-12-08 14:01:59.235819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:06:56.573 [2024-12-08 14:01:59.382586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.831 [2024-12-08 14:01:59.555798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.736 14:02:01 -- accel/accel.sh@18 -- # out=' 00:06:58.736 SPDK Configuration: 00:06:58.736 Core mask: 0x1 00:06:58.736 00:06:58.736 Accel Perf Configuration: 00:06:58.736 Workload Type: dif_generate 00:06:58.736 Vector size: 4096 bytes 00:06:58.736 Transfer size: 4096 bytes 00:06:58.736 Block size: 512 bytes 00:06:58.736 Metadata size: 8 bytes 00:06:58.736 Vector count 1 00:06:58.736 Module: software 00:06:58.736 Queue depth: 32 00:06:58.736 Allocate depth: 32 00:06:58.736 # threads/core: 1 00:06:58.736 Run time: 1 seconds 00:06:58.736 Verify: No 00:06:58.736 00:06:58.736 Running for 1 seconds... 
00:06:58.736 00:06:58.736 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.736 ------------------------------------------------------------------------------------ 00:06:58.736 0,0 118304/s 462 MiB/s 0 0 00:06:58.736 ==================================================================================== 00:06:58.736 Total 118304/s 462 MiB/s 0 0' 00:06:58.736 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.736 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.736 14:02:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:58.736 14:02:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:58.736 14:02:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.736 14:02:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.736 14:02:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.736 14:02:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.736 14:02:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.736 14:02:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.736 14:02:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.736 14:02:01 -- accel/accel.sh@42 -- # jq -r . 00:06:58.736 [2024-12-08 14:02:01.273331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.736 [2024-12-08 14:02:01.273436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:06:58.736 [2024-12-08 14:02:01.429861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.736 [2024-12-08 14:02:01.604201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=0x1 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=dif_generate 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val
00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=software 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=32 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=32 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=1 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val=No 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.998 14:02:01 -- accel/accel.sh@21 -- # val= 00:06:58.998 14:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # IFS=: 00:06:58.998 14:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:00.898 14:02:03 -- accel/accel.sh@21 -- # val= 00:07:00.898 14:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.898 14:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:00.898 14:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.898 14:02:03 -- accel/accel.sh@21 -- # val= 00:07:00.898 14:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.898 14:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:00.898 14:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.898 14:02:03 -- accel/accel.sh@21 -- # val= 00:07:00.899 14:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.899 14:02:03 -- 
accel/accel.sh@20 -- # IFS=: 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.899 14:02:03 -- accel/accel.sh@21 -- # val= 00:07:00.899 14:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.899 14:02:03 -- accel/accel.sh@21 -- # val= 00:07:00.899 14:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.899 14:02:03 -- accel/accel.sh@21 -- # val= 00:07:00.899 14:02:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:00.899 14:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.899 14:02:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.899 14:02:03 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:00.899 14:02:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.899 00:07:00.899 real 0m4.117s 00:07:00.899 user 0m3.663s 00:07:00.899 sys 0m0.239s 00:07:00.899 14:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.899 14:02:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.899 ************************************ 00:07:00.899 END TEST accel_dif_generate 00:07:00.899 ************************************ 00:07:00.899 14:02:03 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:00.899 14:02:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:00.899 14:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.899 14:02:03 -- common/autotest_common.sh@10 -- # set +x 00:07:00.899 ************************************ 00:07:00.899 START TEST accel_dif_generate_copy 00:07:00.899 ************************************ 00:07:00.899 14:02:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:00.899 14:02:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.899 14:02:03 -- accel/accel.sh@17 -- # local accel_module 00:07:00.899 14:02:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:00.899 14:02:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:00.899 14:02:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.899 14:02:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.899 14:02:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.899 14:02:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.899 14:02:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.899 14:02:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.899 14:02:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.899 14:02:03 -- accel/accel.sh@42 -- # jq -r . 00:07:00.899 [2024-12-08 14:02:03.390928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:00.899 [2024-12-08 14:02:03.391043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59332 ] 00:07:00.899 [2024-12-08 14:02:03.537174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.899 [2024-12-08 14:02:03.685910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.796 14:02:05 -- accel/accel.sh@18 -- # out=' 00:07:02.796 SPDK Configuration: 00:07:02.796 Core mask: 0x1 00:07:02.796 00:07:02.796 Accel Perf Configuration: 00:07:02.796 Workload Type: dif_generate_copy 00:07:02.796 Vector size: 4096 bytes 00:07:02.796 Transfer size: 4096 bytes 00:07:02.796 Vector count 1 00:07:02.796 Module: software 00:07:02.796 Queue depth: 32 00:07:02.796 Allocate depth: 32 00:07:02.796 # threads/core: 1 00:07:02.796 Run time: 1 seconds 00:07:02.796 Verify: No 00:07:02.796 00:07:02.796 Running for 1 seconds... 00:07:02.796 00:07:02.796 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.796 ------------------------------------------------------------------------------------ 00:07:02.796 0,0 113248/s 442 MiB/s 0 0 00:07:02.796 ==================================================================================== 00:07:02.796 Total 113248/s 442 MiB/s 0 0' 00:07:02.796 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:02.796 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:02.796 14:02:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:02.796 14:02:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.796 14:02:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:02.796 14:02:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.796 14:02:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.796 14:02:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.796 14:02:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.796 14:02:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.796 14:02:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.796 14:02:05 -- accel/accel.sh@42 -- # jq -r . 00:07:02.796 [2024-12-08 14:02:05.325027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:02.796 [2024-12-08 14:02:05.325134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:07:02.796 [2024-12-08 14:02:05.470203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.796 [2024-12-08 14:02:05.620287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val=0x1 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val=software 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val=32 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val=32 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 
-- # val=1 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val=No 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:03.054 14:02:05 -- accel/accel.sh@21 -- # val= 00:07:03.054 14:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:03.054 14:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@21 -- # val= 00:07:04.429 14:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@21 -- # val= 00:07:04.429 14:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@21 -- # val= 00:07:04.429 14:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@21 -- # val= 00:07:04.429 14:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@21 -- # val= 00:07:04.429 14:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@21 -- # val= 00:07:04.429 14:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:04.429 14:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:04.429 14:02:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.429 14:02:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:04.429 14:02:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.429 00:07:04.429 real 0m3.860s 00:07:04.429 user 0m3.411s 00:07:04.429 sys 0m0.236s 00:07:04.429 14:02:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.429 14:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:04.429 ************************************ 00:07:04.429 END TEST accel_dif_generate_copy 00:07:04.429 ************************************ 00:07:04.429 14:02:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:04.429 14:02:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.429 14:02:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:04.429 14:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.429 14:02:07 -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.430 ************************************ 00:07:04.430 START TEST accel_comp 00:07:04.430 ************************************ 00:07:04.430 14:02:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.430 14:02:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.430 14:02:07 -- accel/accel.sh@17 -- # local accel_module 00:07:04.430 14:02:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.430 14:02:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.430 14:02:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.430 14:02:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.430 14:02:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.430 14:02:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.430 14:02:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.430 14:02:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.430 14:02:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.430 14:02:07 -- accel/accel.sh@42 -- # jq -r . 00:07:04.430 [2024-12-08 14:02:07.286508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.430 [2024-12-08 14:02:07.286973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59395 ] 00:07:04.688 [2024-12-08 14:02:07.435024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.688 [2024-12-08 14:02:07.585587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.667 14:02:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:06.667 00:07:06.667 SPDK Configuration: 00:07:06.667 Core mask: 0x1 00:07:06.667 00:07:06.667 Accel Perf Configuration: 00:07:06.667 Workload Type: compress 00:07:06.667 Transfer size: 4096 bytes 00:07:06.667 Vector count 1 00:07:06.667 Module: software 00:07:06.667 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.667 Queue depth: 32 00:07:06.667 Allocate depth: 32 00:07:06.667 # threads/core: 1 00:07:06.667 Run time: 1 seconds 00:07:06.667 Verify: No 00:07:06.667 00:07:06.667 Running for 1 seconds... 
00:07:06.667 00:07:06.667 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.667 ------------------------------------------------------------------------------------ 00:07:06.667 0,0 62272/s 243 MiB/s 0 0 00:07:06.667 ==================================================================================== 00:07:06.667 Total 62272/s 243 MiB/s 0 0' 00:07:06.667 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.667 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.667 14:02:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.667 14:02:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.667 14:02:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.667 14:02:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.667 14:02:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.667 14:02:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.667 14:02:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.667 14:02:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.667 14:02:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.667 14:02:09 -- accel/accel.sh@42 -- # jq -r . 00:07:06.667 [2024-12-08 14:02:09.227590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.667 [2024-12-08 14:02:09.227691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59416 ] 00:07:06.667 [2024-12-08 14:02:09.373446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.667 [2024-12-08 14:02:09.518095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val=0x1 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val=compress 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=:
00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.925 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.925 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.925 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val=software 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val=32 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val=32 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val=1 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val=No 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:06.926 14:02:09 -- accel/accel.sh@21 -- # val= 00:07:06.926 14:02:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:06.926 14:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@21 -- # val= 00:07:08.310 14:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@21 -- # val= 00:07:08.310 14:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@21 -- # val= 00:07:08.310 14:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@21 -- # val= 
00:07:08.310 14:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@21 -- # val= 00:07:08.310 14:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@21 -- # val= 00:07:08.310 14:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:08.310 14:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.310 14:02:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.310 14:02:11 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:08.310 14:02:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.310 00:07:08.310 real 0m3.864s 00:07:08.310 user 0m3.413s 00:07:08.310 sys 0m0.234s 00:07:08.310 14:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.310 ************************************ 00:07:08.310 END TEST accel_comp 00:07:08.310 ************************************ 00:07:08.310 14:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.310 14:02:11 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.310 14:02:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:08.310 14:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.310 14:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:08.310 ************************************ 00:07:08.310 START TEST accel_decomp 00:07:08.310 ************************************ 00:07:08.310 14:02:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.310 14:02:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.310 14:02:11 -- accel/accel.sh@17 -- # local accel_module 00:07:08.310 14:02:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.310 14:02:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:08.310 14:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.310 14:02:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.310 14:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.310 14:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.310 14:02:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.310 14:02:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.310 14:02:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.310 14:02:11 -- accel/accel.sh@42 -- # jq -r . 00:07:08.310 [2024-12-08 14:02:11.198925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.310 [2024-12-08 14:02:11.199208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ] 00:07:08.569 [2024-12-08 14:02:11.355006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.826 [2024-12-08 14:02:11.495553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.202 14:02:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:10.202 00:07:10.202 SPDK Configuration: 00:07:10.202 Core mask: 0x1 00:07:10.202 00:07:10.202 Accel Perf Configuration: 00:07:10.202 Workload Type: decompress 00:07:10.202 Transfer size: 4096 bytes 00:07:10.202 Vector count 1 00:07:10.202 Module: software 00:07:10.202 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.202 Queue depth: 32 00:07:10.202 Allocate depth: 32 00:07:10.202 # threads/core: 1 00:07:10.202 Run time: 1 seconds 00:07:10.202 Verify: Yes 00:07:10.202 00:07:10.202 Running for 1 seconds... 00:07:10.203 00:07:10.203 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.203 ------------------------------------------------------------------------------------ 00:07:10.203 0,0 80832/s 148 MiB/s 0 0 00:07:10.203 ==================================================================================== 00:07:10.203 Total 80832/s 315 MiB/s 0 0' 00:07:10.203 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.203 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.203 14:02:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.203 14:02:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.203 14:02:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.203 14:02:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.203 14:02:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.203 14:02:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.203 14:02:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.203 14:02:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.203 14:02:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.203 14:02:13 -- accel/accel.sh@42 -- # jq -r . 00:07:10.461 [2024-12-08 14:02:13.125739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
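[Editor's note] The repeated "IFS=: / read -r var val / case "$var" in" lines above are bash xtrace output from accel.sh parsing the configuration block that accel_perf prints (Transfer size, Module, File Name, Queue depth, and so on), capturing accel_opc and accel_module for the post-run checks. A minimal sketch of that pattern; the variable and field names here are assumptions, not the exact script:

while IFS=: read -r var val; do
    case "$var" in
        *'Workload Type') accel_opc=${val# } ;;    # e.g. "decompress"
        *'Module')        accel_module=${val# } ;; # e.g. "software"
    esac
done <<< "$out"
[[ -n $accel_module ]] && [[ $accel_module == "software" ]]

The oddly escaped "[[ software == \s\o\f\t\w\a\r\e ]]" seen in the trace is just how xtrace renders a quoted right-hand side of [[ == ]]: the per-character escaping shows it is compared as a literal string rather than a glob pattern.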
00:07:10.461 [2024-12-08 14:02:13.125844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59483 ] 00:07:10.461 [2024-12-08 14:02:13.271979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.719 [2024-12-08 14:02:13.413922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val=0x1 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val=decompress 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val=software 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val=32 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- 
accel/accel.sh@21 -- # val=32 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.719 14:02:13 -- accel/accel.sh@21 -- # val=1 00:07:10.719 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.719 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.720 14:02:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.720 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.720 14:02:13 -- accel/accel.sh@21 -- # val=Yes 00:07:10.720 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.720 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.720 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:10.720 14:02:13 -- accel/accel.sh@21 -- # val= 00:07:10.720 14:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:10.720 14:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:12.094 14:02:14 -- accel/accel.sh@21 -- # val= 00:07:12.094 14:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.095 14:02:14 -- accel/accel.sh@21 -- # val= 00:07:12.095 14:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.095 14:02:14 -- accel/accel.sh@21 -- # val= 00:07:12.095 14:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.095 14:02:14 -- accel/accel.sh@21 -- # val= 00:07:12.095 14:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.095 14:02:14 -- accel/accel.sh@21 -- # val= 00:07:12.095 14:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.095 14:02:14 -- accel/accel.sh@21 -- # val= 00:07:12.095 14:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:12.095 14:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:12.095 ************************************ 00:07:12.095 END TEST accel_decomp 00:07:12.095 ************************************ 00:07:12.095 14:02:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.095 14:02:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:12.095 14:02:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.095 00:07:12.095 real 0m3.839s 00:07:12.095 user 0m3.389s 00:07:12.095 sys 0m0.238s 00:07:12.095 14:02:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.095 14:02:15 -- common/autotest_common.sh@10 -- # set +x 00:07:12.353 14:02:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
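[Editor's note] run_test, from autotest_common.sh, is the wrapper that produces the "START TEST"/"END TEST" banners and the real/user/sys timings throughout this log; the "'[' 11 -le 1 ']'" trace just below appears to be its argument-count guard (test name plus ten command tokens here). A rough sketch of its shape, assuming it only guards, banners, and times the named command:

run_test() {
    [ $# -le 1 ] && return 1   # guard matching the "'[' 11 -le 1 ']'" trace
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                  # e.g. accel_test -t 1 -w decompress ... -o 0
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}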
00:07:12.353 14:02:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:12.354 14:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.354 14:02:15 -- common/autotest_common.sh@10 -- # set +x 00:07:12.354 ************************************ 00:07:12.354 START TEST accel_decmop_full 00:07:12.354 ************************************ 00:07:12.354 14:02:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:12.354 14:02:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.354 14:02:15 -- accel/accel.sh@17 -- # local accel_module 00:07:12.354 14:02:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:12.354 14:02:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:12.354 14:02:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.354 14:02:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.354 14:02:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.354 14:02:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.354 14:02:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.354 14:02:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.354 14:02:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.354 14:02:15 -- accel/accel.sh@42 -- # jq -r . 00:07:12.354 [2024-12-08 14:02:15.083033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.354 [2024-12-08 14:02:15.083266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59524 ] 00:07:12.354 [2024-12-08 14:02:15.226271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.611 [2024-12-08 14:02:15.398680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.516 14:02:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:14.516 00:07:14.516 SPDK Configuration: 00:07:14.516 Core mask: 0x1 00:07:14.516 00:07:14.516 Accel Perf Configuration: 00:07:14.516 Workload Type: decompress 00:07:14.516 Transfer size: 111250 bytes 00:07:14.516 Vector count 1 00:07:14.516 Module: software 00:07:14.516 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.516 Queue depth: 32 00:07:14.516 Allocate depth: 32 00:07:14.516 # threads/core: 1 00:07:14.516 Run time: 1 seconds 00:07:14.516 Verify: Yes 00:07:14.516 00:07:14.516 Running for 1 seconds... 
00:07:14.516 00:07:14.516 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.516 ------------------------------------------------------------------------------------ 00:07:14.516 0,0 4256/s 175 MiB/s 0 0 00:07:14.516 ==================================================================================== 00:07:14.516 Total 4256/s 451 MiB/s 0 0' 00:07:14.516 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.516 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.516 14:02:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:14.516 14:02:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:14.516 14:02:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.516 14:02:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.516 14:02:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.516 14:02:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.516 14:02:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.516 14:02:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.516 14:02:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.516 14:02:17 -- accel/accel.sh@42 -- # jq -r . 00:07:14.516 [2024-12-08 14:02:17.075655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.516 [2024-12-08 14:02:17.075759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:07:14.516 [2024-12-08 14:02:17.227994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.516 [2024-12-08 14:02:17.373211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=0x1 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=decompress 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:14.776 14:02:17 -- accel/accel.sh@20 
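[Editor's note] This "full" variant passes -o 0 instead of the default 4096-byte transfer size; the reported "Transfer size: 111250 bytes" suggests accel_perf then derives the transfer size from the input data itself (an inference from this log, not checked against the accel_perf source). The Total row is consistent with transfers/s times that transfer size:

echo $(( 4256 * 111250 / 1048576 ))   # 451 -> matches "Total 4256/s 451 MiB/s"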
-- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=software 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=32 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=32 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=1 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val=Yes 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:14.776 14:02:17 -- accel/accel.sh@21 -- # val= 00:07:14.776 14:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:14.776 14:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@21 -- # val= 00:07:16.150 14:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@21 -- # val= 00:07:16.150 14:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@21 -- # val= 00:07:16.150 14:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@21 -- # 
val= 00:07:16.150 14:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@21 -- # val= 00:07:16.150 14:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@21 -- # val= 00:07:16.150 14:02:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:16.150 14:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:16.150 14:02:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.150 14:02:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.150 14:02:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.150 00:07:16.150 real 0m3.927s 00:07:16.150 user 0m3.474s 00:07:16.150 sys 0m0.241s 00:07:16.150 14:02:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.150 14:02:18 -- common/autotest_common.sh@10 -- # set +x 00:07:16.150 ************************************ 00:07:16.150 END TEST accel_decmop_full 00:07:16.150 ************************************ 00:07:16.150 14:02:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.150 14:02:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:16.150 14:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.150 14:02:19 -- common/autotest_common.sh@10 -- # set +x 00:07:16.150 ************************************ 00:07:16.150 START TEST accel_decomp_mcore 00:07:16.150 ************************************ 00:07:16.150 14:02:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.151 14:02:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.151 14:02:19 -- accel/accel.sh@17 -- # local accel_module 00:07:16.151 14:02:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.151 14:02:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.151 14:02:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.151 14:02:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.151 14:02:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.151 14:02:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.151 14:02:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.151 14:02:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.151 14:02:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.151 14:02:19 -- accel/accel.sh@42 -- # jq -r . 00:07:16.151 [2024-12-08 14:02:19.065087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.151 [2024-12-08 14:02:19.065183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59591 ] 00:07:16.412 [2024-12-08 14:02:19.214370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.673 [2024-12-08 14:02:19.424161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.673 [2024-12-08 14:02:19.424288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.673 [2024-12-08 14:02:19.424681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.673 [2024-12-08 14:02:19.424685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.588 14:02:21 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:18.588 00:07:18.588 SPDK Configuration: 00:07:18.588 Core mask: 0xf 00:07:18.588 00:07:18.588 Accel Perf Configuration: 00:07:18.588 Workload Type: decompress 00:07:18.588 Transfer size: 4096 bytes 00:07:18.588 Vector count 1 00:07:18.588 Module: software 00:07:18.588 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.588 Queue depth: 32 00:07:18.588 Allocate depth: 32 00:07:18.588 # threads/core: 1 00:07:18.588 Run time: 1 seconds 00:07:18.588 Verify: Yes 00:07:18.588 00:07:18.588 Running for 1 seconds... 00:07:18.588 00:07:18.588 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.588 ------------------------------------------------------------------------------------ 00:07:18.588 0,0 54912/s 101 MiB/s 0 0 00:07:18.588 3,0 55488/s 102 MiB/s 0 0 00:07:18.588 2,0 55392/s 102 MiB/s 0 0 00:07:18.588 1,0 55296/s 101 MiB/s 0 0 00:07:18.588 ==================================================================================== 00:07:18.588 Total 221088/s 863 MiB/s 0 0' 00:07:18.588 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:18.588 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:18.588 14:02:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.588 14:02:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.588 14:02:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.588 14:02:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.588 14:02:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.588 14:02:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.588 14:02:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.588 14:02:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.588 14:02:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.588 14:02:21 -- accel/accel.sh@42 -- # jq -r . 00:07:18.588 [2024-12-08 14:02:21.247270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
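[Editor's note] -m 0xf is the core mask (binary 1111), which is why the run logs "Total cores available: 4", starts four reactors on cores 0-3, and reports one "Core,Thread" row per core in the table above. The per-core rates sum exactly to the Total row, and the mask decodes as below (a standalone sketch, not taken from the log):

echo $(( 54912 + 55488 + 55392 + 55296 ))   # 221088 -> matches "Total 221088/s"
mask=0xf
for c in {0..7}; do (( (mask >> c) & 1 )) && echo "core $c enabled"; done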
00:07:18.588 [2024-12-08 14:02:21.247771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:07:18.588 [2024-12-08 14:02:21.398743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.847 [2024-12-08 14:02:21.631047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.847 [2024-12-08 14:02:21.631549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.847 [2024-12-08 14:02:21.631249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.847 [2024-12-08 14:02:21.631658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=0xf 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=decompress 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=software 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 
00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=32 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=32 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=1 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val=Yes 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:19.105 14:02:21 -- accel/accel.sh@21 -- # val= 00:07:19.105 14:02:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:19.105 14:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- 
accel/accel.sh@20 -- # read -r var val 00:07:20.485 14:02:23 -- accel/accel.sh@21 -- # val= 00:07:20.485 14:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:20.485 14:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:20.485 ************************************ 00:07:20.485 END TEST accel_decomp_mcore 00:07:20.485 ************************************ 00:07:20.485 14:02:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.485 14:02:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.485 14:02:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.485 00:07:20.485 real 0m4.353s 00:07:20.485 user 0m6.547s 00:07:20.485 sys 0m0.192s 00:07:20.485 14:02:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.485 14:02:23 -- common/autotest_common.sh@10 -- # set +x 00:07:20.745 14:02:23 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.745 14:02:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:20.745 14:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.745 14:02:23 -- common/autotest_common.sh@10 -- # set +x 00:07:20.745 ************************************ 00:07:20.745 START TEST accel_decomp_full_mcore 00:07:20.745 ************************************ 00:07:20.745 14:02:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.745 14:02:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.745 14:02:23 -- accel/accel.sh@17 -- # local accel_module 00:07:20.745 14:02:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.745 14:02:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.745 14:02:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.745 14:02:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.745 14:02:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.745 14:02:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.745 14:02:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.745 14:02:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.745 14:02:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.745 14:02:23 -- accel/accel.sh@42 -- # jq -r . 00:07:20.746 [2024-12-08 14:02:23.490435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.746 [2024-12-08 14:02:23.490740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59664 ] 00:07:20.746 [2024-12-08 14:02:23.643327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.006 [2024-12-08 14:02:23.907138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.006 [2024-12-08 14:02:23.907451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.006 [2024-12-08 14:02:23.907830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.006 [2024-12-08 14:02:23.907934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.920 14:02:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:22.920 00:07:22.920 SPDK Configuration: 00:07:22.920 Core mask: 0xf 00:07:22.920 00:07:22.920 Accel Perf Configuration: 00:07:22.920 Workload Type: decompress 00:07:22.920 Transfer size: 111250 bytes 00:07:22.920 Vector count 1 00:07:22.920 Module: software 00:07:22.920 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.920 Queue depth: 32 00:07:22.920 Allocate depth: 32 00:07:22.920 # threads/core: 1 00:07:22.920 Run time: 1 seconds 00:07:22.920 Verify: Yes 00:07:22.920 00:07:22.920 Running for 1 seconds... 00:07:22.920 00:07:22.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.920 ------------------------------------------------------------------------------------ 00:07:22.920 0,0 4160/s 171 MiB/s 0 0 00:07:22.920 3,0 4544/s 187 MiB/s 0 0 00:07:22.920 2,0 4192/s 173 MiB/s 0 0 00:07:22.920 1,0 4160/s 171 MiB/s 0 0 00:07:22.920 ==================================================================================== 00:07:22.920 Total 17056/s 1809 MiB/s 0 0' 00:07:22.920 14:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:22.920 14:02:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.920 14:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:22.920 14:02:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.920 14:02:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.920 14:02:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.920 14:02:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.920 14:02:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.920 14:02:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.920 14:02:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.920 14:02:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.920 14:02:25 -- accel/accel.sh@42 -- # jq -r . 00:07:23.179 [2024-12-08 14:02:25.861520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
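[Editor's note] Comparing this Total (17056 transfers/s across four cores) with the single-core full-buffer run earlier (4256/s) shows near-linear ~4x scaling, consistent with each reactor decompressing its own buffers independently:

echo $(( 4160 + 4544 + 4192 + 4160 ))   # 17056 -> the Total row above
echo $(( 17056 / 4256 ))                # 4 -> ~4x the single-core rate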
00:07:23.179 [2024-12-08 14:02:25.861644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:07:23.179 [2024-12-08 14:02:26.011555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.437 [2024-12-08 14:02:26.292850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.437 [2024-12-08 14:02:26.292928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.437 [2024-12-08 14:02:26.293205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.437 [2024-12-08 14:02:26.293260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=0xf 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=decompress 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=software 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 
00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=32 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=32 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=1 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val=Yes 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:23.709 14:02:26 -- accel/accel.sh@21 -- # val= 00:07:23.709 14:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:23.709 14:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- 
accel/accel.sh@20 -- # read -r var val 00:07:25.139 14:02:28 -- accel/accel.sh@21 -- # val= 00:07:25.139 14:02:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # IFS=: 00:07:25.139 14:02:28 -- accel/accel.sh@20 -- # read -r var val 00:07:25.139 ************************************ 00:07:25.139 END TEST accel_decomp_full_mcore 00:07:25.139 ************************************ 00:07:25.139 14:02:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.139 14:02:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:25.139 14:02:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.139 00:07:25.139 real 0m4.582s 00:07:25.139 user 0m6.783s 00:07:25.139 sys 0m0.231s 00:07:25.139 14:02:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.139 14:02:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.399 14:02:28 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.399 14:02:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:25.399 14:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.399 14:02:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.399 ************************************ 00:07:25.399 START TEST accel_decomp_mthread 00:07:25.399 ************************************ 00:07:25.399 14:02:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.399 14:02:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.399 14:02:28 -- accel/accel.sh@17 -- # local accel_module 00:07:25.399 14:02:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.399 14:02:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.399 14:02:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.399 14:02:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.399 14:02:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.399 14:02:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.399 14:02:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.399 14:02:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.399 14:02:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.399 14:02:28 -- accel/accel.sh@42 -- # jq -r . 00:07:25.399 [2024-12-08 14:02:28.119097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.399 [2024-12-08 14:02:28.119175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59743 ] 00:07:25.399 [2024-12-08 14:02:28.260200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.658 [2024-12-08 14:02:28.440764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.560 14:02:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:27.560 00:07:27.560 SPDK Configuration: 00:07:27.560 Core mask: 0x1 00:07:27.560 00:07:27.560 Accel Perf Configuration: 00:07:27.560 Workload Type: decompress 00:07:27.560 Transfer size: 4096 bytes 00:07:27.560 Vector count 1 00:07:27.560 Module: software 00:07:27.560 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.560 Queue depth: 32 00:07:27.560 Allocate depth: 32 00:07:27.560 # threads/core: 2 00:07:27.560 Run time: 1 seconds 00:07:27.560 Verify: Yes 00:07:27.560 00:07:27.560 Running for 1 seconds... 00:07:27.560 00:07:27.560 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.560 ------------------------------------------------------------------------------------ 00:07:27.560 0,1 37056/s 68 MiB/s 0 0 00:07:27.560 0,0 36960/s 68 MiB/s 0 0 00:07:27.560 ==================================================================================== 00:07:27.560 Total 74016/s 289 MiB/s 0 0' 00:07:27.560 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:27.560 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:27.560 14:02:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:27.560 14:02:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:27.560 14:02:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.560 14:02:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.560 14:02:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.560 14:02:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.560 14:02:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.560 14:02:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.560 14:02:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.560 14:02:30 -- accel/accel.sh@42 -- # jq -r . 00:07:27.560 [2024-12-08 14:02:30.238721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
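[Editor's note] -T 2 requests two worker threads per core (reported above as "# threads/core: 2"), so the "Core,Thread" column now shows rows 0,0 and 0,1: both threads on core 0. Their sum, 36960 + 37056 = 74016/s, is slightly below the 80832/s a single thread achieved earlier, suggesting the software decompress path already saturates the core and a second thread on it only adds scheduling overhead. Invocation shape (path shortened; $SPDK and $bib stand in for the repo and test input file paths):

$SPDK/build/examples/accel_perf -t 1 -w decompress -l "$bib" -y -T 2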
00:07:27.560 [2024-12-08 14:02:30.238807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59769 ] 00:07:27.560 [2024-12-08 14:02:30.385152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.818 [2024-12-08 14:02:30.590907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.075 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=0x1 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=decompress 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=software 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=32 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- 
accel/accel.sh@21 -- # val=32 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=2 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val=Yes 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.076 14:02:30 -- accel/accel.sh@21 -- # val= 00:07:28.076 14:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.076 14:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@21 -- # val= 00:07:29.987 14:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:29.987 14:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.987 14:02:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.987 14:02:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:29.987 14:02:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.987 00:07:29.987 real 0m4.327s 00:07:29.987 user 0m3.827s 00:07:29.987 sys 0m0.286s 00:07:29.987 14:02:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.987 ************************************ 00:07:29.987 END TEST accel_decomp_mthread 00:07:29.987 
************************************ 00:07:29.987 14:02:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.987 14:02:32 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.987 14:02:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:29.987 14:02:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.987 14:02:32 -- common/autotest_common.sh@10 -- # set +x 00:07:29.987 ************************************ 00:07:29.987 START TEST accel_deomp_full_mthread 00:07:29.987 ************************************ 00:07:29.987 14:02:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.987 14:02:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.987 14:02:32 -- accel/accel.sh@17 -- # local accel_module 00:07:29.987 14:02:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.987 14:02:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.987 14:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.987 14:02:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.987 14:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.987 14:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.987 14:02:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.987 14:02:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.987 14:02:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.987 14:02:32 -- accel/accel.sh@42 -- # jq -r . 00:07:29.987 [2024-12-08 14:02:32.519506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.987 [2024-12-08 14:02:32.519658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:07:29.987 [2024-12-08 14:02:32.676198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.248 [2024-12-08 14:02:32.968889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.163 14:02:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.163 00:07:32.163 SPDK Configuration: 00:07:32.163 Core mask: 0x1 00:07:32.163 00:07:32.163 Accel Perf Configuration: 00:07:32.163 Workload Type: decompress 00:07:32.163 Transfer size: 111250 bytes 00:07:32.163 Vector count 1 00:07:32.163 Module: software 00:07:32.163 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.163 Queue depth: 32 00:07:32.163 Allocate depth: 32 00:07:32.163 # threads/core: 2 00:07:32.163 Run time: 1 seconds 00:07:32.163 Verify: Yes 00:07:32.163 00:07:32.163 Running for 1 seconds... 
00:07:32.163 00:07:32.163 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.163 ------------------------------------------------------------------------------------ 00:07:32.163 0,1 2144/s 88 MiB/s 0 0 00:07:32.163 0,0 2144/s 88 MiB/s 0 0 00:07:32.163 ==================================================================================== 00:07:32.163 Total 4288/s 454 MiB/s 0 0' 00:07:32.164 14:02:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.164 14:02:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.164 14:02:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.164 14:02:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.164 14:02:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.164 14:02:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.164 14:02:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.164 14:02:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.164 14:02:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.164 14:02:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.164 14:02:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.164 14:02:34 -- accel/accel.sh@42 -- # jq -r . 00:07:32.164 [2024-12-08 14:02:34.999741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.164 [2024-12-08 14:02:35.000073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59847 ] 00:07:32.424 [2024-12-08 14:02:35.154262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.684 [2024-12-08 14:02:35.412702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=0x1 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=decompress 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=software 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=32 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=32 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=2 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val=Yes 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.945 14:02:35 -- accel/accel.sh@21 -- # val= 00:07:32.945 14:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:32.945 14:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # 
read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@21 -- # val= 00:07:34.858 14:02:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.858 14:02:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.858 14:02:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.858 14:02:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.858 14:02:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.858 00:07:34.858 real 0m4.909s 00:07:34.858 user 0m4.233s 00:07:34.858 sys 0m0.441s 00:07:34.858 14:02:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.858 14:02:37 -- common/autotest_common.sh@10 -- # set +x 00:07:34.858 ************************************ 00:07:34.858 END TEST accel_decomp_full_mthread 00:07:34.858 ************************************ 00:07:34.858 14:02:37 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:34.858 14:02:37 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.858 14:02:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:34.858 14:02:37 -- accel/accel.sh@129 -- # build_accel_config 00:07:34.858 14:02:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.858 14:02:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.858 14:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.858 14:02:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.858 14:02:37 -- common/autotest_common.sh@10 -- # set +x 00:07:34.858 14:02:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.858 14:02:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.858 14:02:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.858 14:02:37 -- accel/accel.sh@42 -- # jq -r . 00:07:34.858 ************************************ 00:07:34.858 START TEST accel_dif_functional_tests 00:07:34.859 ************************************ 00:07:34.859 14:02:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.859 [2024-12-08 14:02:37.516124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
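A note on mechanics: the multithreaded decompress cases above are thin wrappers around the accel_perf example binary, while accel_dif_functional_tests starting here is a standalone CUnit suite; in both cases accel.sh feeds the accel JSON config on fd 62 via build_accel_config. A minimal manual sketch, assuming the default autotest layout seen in this log:
$ cd /home/vagrant/spdk_repo/spdk
$ ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2   # benchmark path; the harness supplies the config JSON on fd 62
$ ./test/accel/dif/dif -c /dev/fd/62                                                            # DIF CUnit suite, same config mechanism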
00:07:34.859 [2024-12-08 14:02:37.516244] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:07:34.859 [2024-12-08 14:02:37.666433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.120 [2024-12-08 14:02:37.863242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.120 [2024-12-08 14:02:37.863506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.120 [2024-12-08 14:02:37.863519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.381 00:07:35.382 00:07:35.382 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.382 http://cunit.sourceforge.net/ 00:07:35.382 00:07:35.382 00:07:35.382 Suite: accel_dif 00:07:35.382 Test: verify: DIF generated, GUARD check ...passed 00:07:35.382 Test: verify: DIF generated, APPTAG check ...passed 00:07:35.382 Test: verify: DIF generated, REFTAG check ...passed 00:07:35.382 Test: verify: DIF not generated, GUARD check ...[2024-12-08 14:02:38.060374] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.382 passed 00:07:35.382 Test: verify: DIF not generated, APPTAG check ...passed 00:07:35.382 Test: verify: DIF not generated, REFTAG check ...passed 00:07:35.382 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:35.382 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:35.382 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-12-08 14:02:38.060551] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.382 [2024-12-08 14:02:38.060621] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.382 [2024-12-08 14:02:38.060653] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.382 [2024-12-08 14:02:38.060680] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.382 [2024-12-08 14:02:38.060704] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.382 [2024-12-08 14:02:38.060770] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:35.382 passed 00:07:35.382 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:35.382 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:35.382 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:35.382 Test: generate copy: DIF generated, GUARD check ...passed 00:07:35.382 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:35.382 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:35.382 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:35.382 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:35.382 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-12-08 14:02:38.060946] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:35.382 passed 00:07:35.382 Test: generate copy: iovecs-len validate ...passed 00:07:35.382 Test: generate copy: buffer alignment validate ...passed 00:07:35.382 00:07:35.382 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.382 suites 1 1 n/a 0 0 00:07:35.382 tests 20 20 20 0 0 00:07:35.382
asserts 204 204 204 0 n/a 00:07:35.382 00:07:35.382 Elapsed time = 0.003 seconds 00:07:35.382 [2024-12-08 14:02:38.061380] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:35.953 00:07:35.953 real 0m1.276s 00:07:35.953 user 0m2.231s 00:07:35.953 sys 0m0.188s 00:07:35.953 14:02:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.953 ************************************ 00:07:35.953 END TEST accel_dif_functional_tests 00:07:35.953 ************************************ 00:07:35.953 14:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.953 00:07:35.953 real 1m31.330s 00:07:35.953 user 1m38.924s 00:07:35.953 sys 0m7.490s 00:07:35.953 14:02:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.953 14:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.953 ************************************ 00:07:35.953 END TEST accel 00:07:35.953 ************************************ 00:07:35.953 14:02:38 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:35.953 14:02:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.953 14:02:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.953 14:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:35.953 ************************************ 00:07:35.953 START TEST accel_rpc 00:07:35.953 ************************************ 00:07:35.953 14:02:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:36.216 * Looking for test storage... 00:07:36.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:36.216 14:02:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.216 14:02:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.216 14:02:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.216 14:02:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.216 14:02:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.216 14:02:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.216 14:02:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.216 14:02:38 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.216 14:02:38 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.216 14:02:38 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.216 14:02:38 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.216 14:02:38 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.216 14:02:38 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.216 14:02:38 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.216 14:02:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.216 14:02:38 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.216 14:02:38 -- scripts/common.sh@344 -- # : 1 00:07:36.216 14:02:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.216 14:02:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.216 14:02:38 -- scripts/common.sh@364 -- # decimal 1 00:07:36.216 14:02:38 -- scripts/common.sh@352 -- # local d=1 00:07:36.216 14:02:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.216 14:02:38 -- scripts/common.sh@354 -- # echo 1 00:07:36.216 14:02:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.216 14:02:38 -- scripts/common.sh@365 -- # decimal 2 00:07:36.216 14:02:38 -- scripts/common.sh@352 -- # local d=2 00:07:36.216 14:02:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.216 14:02:38 -- scripts/common.sh@354 -- # echo 2 00:07:36.216 14:02:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.216 14:02:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.216 14:02:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.216 14:02:38 -- scripts/common.sh@367 -- # return 0 00:07:36.216 14:02:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.216 14:02:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.216 --rc genhtml_branch_coverage=1 00:07:36.216 --rc genhtml_function_coverage=1 00:07:36.216 --rc genhtml_legend=1 00:07:36.216 --rc geninfo_all_blocks=1 00:07:36.216 --rc geninfo_unexecuted_blocks=1 00:07:36.216 00:07:36.216 ' 00:07:36.216 14:02:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.216 --rc genhtml_branch_coverage=1 00:07:36.216 --rc genhtml_function_coverage=1 00:07:36.216 --rc genhtml_legend=1 00:07:36.216 --rc geninfo_all_blocks=1 00:07:36.216 --rc geninfo_unexecuted_blocks=1 00:07:36.216 00:07:36.216 ' 00:07:36.216 14:02:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.216 --rc genhtml_branch_coverage=1 00:07:36.216 --rc genhtml_function_coverage=1 00:07:36.216 --rc genhtml_legend=1 00:07:36.216 --rc geninfo_all_blocks=1 00:07:36.216 --rc geninfo_unexecuted_blocks=1 00:07:36.216 00:07:36.216 ' 00:07:36.216 14:02:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.216 --rc genhtml_branch_coverage=1 00:07:36.216 --rc genhtml_function_coverage=1 00:07:36.216 --rc genhtml_legend=1 00:07:36.216 --rc geninfo_all_blocks=1 00:07:36.216 --rc geninfo_unexecuted_blocks=1 00:07:36.216 00:07:36.216 ' 00:07:36.216 14:02:38 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.216 14:02:38 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59978 00:07:36.216 14:02:38 -- accel/accel_rpc.sh@15 -- # waitforlisten 59978 00:07:36.216 14:02:38 -- common/autotest_common.sh@829 -- # '[' -z 59978 ']' 00:07:36.216 14:02:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.216 14:02:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.216 14:02:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.216 14:02:38 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:36.216 14:02:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.216 14:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:36.216 [2024-12-08 14:02:39.070948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.216 [2024-12-08 14:02:39.071112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59978 ] 00:07:36.477 [2024-12-08 14:02:39.224198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.738 [2024-12-08 14:02:39.431895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.738 [2024-12-08 14:02:39.432103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.000 14:02:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.000 14:02:39 -- common/autotest_common.sh@862 -- # return 0 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:37.000 14:02:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.000 14:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.000 14:02:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.000 ************************************ 00:07:37.000 START TEST accel_assign_opcode 00:07:37.000 ************************************ 00:07:37.000 14:02:39 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:37.000 14:02:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.000 14:02:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.000 [2024-12-08 14:02:39.896720] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:37.000 14:02:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:37.000 14:02:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.000 14:02:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.000 [2024-12-08 14:02:39.904673] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:37.000 14:02:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.000 14:02:39 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:37.000 14:02:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.000 14:02:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.571 14:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.571 14:02:40 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:37.571 14:02:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.571 14:02:40 -- accel/accel_rpc.sh@42 -- # grep software 00:07:37.571 14:02:40 -- common/autotest_common.sh@10 -- # set +x 00:07:37.571 14:02:40 -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:07:37.571 14:02:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.571 software 00:07:37.571 00:07:37.571 real 0m0.539s 00:07:37.571 user 0m0.036s 00:07:37.571 sys 0m0.009s 00:07:37.571 14:02:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.571 ************************************ 00:07:37.571 END TEST accel_assign_opcode 00:07:37.571 ************************************ 00:07:37.571 14:02:40 -- common/autotest_common.sh@10 -- # set +x 00:07:37.572 14:02:40 -- accel/accel_rpc.sh@55 -- # killprocess 59978 00:07:37.572 14:02:40 -- common/autotest_common.sh@936 -- # '[' -z 59978 ']' 00:07:37.572 14:02:40 -- common/autotest_common.sh@940 -- # kill -0 59978 00:07:37.572 14:02:40 -- common/autotest_common.sh@941 -- # uname 00:07:37.572 14:02:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:37.572 14:02:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59978 00:07:37.832 14:02:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:37.832 14:02:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:37.832 killing process with pid 59978 00:07:37.832 14:02:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59978' 00:07:37.832 14:02:40 -- common/autotest_common.sh@955 -- # kill 59978 00:07:37.832 14:02:40 -- common/autotest_common.sh@960 -- # wait 59978 00:07:39.218 00:07:39.218 real 0m2.936s 00:07:39.218 user 0m2.820s 00:07:39.218 sys 0m0.479s 00:07:39.218 14:02:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.218 ************************************ 00:07:39.218 END TEST accel_rpc 00:07:39.218 ************************************ 00:07:39.218 14:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:39.218 14:02:41 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:39.218 14:02:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.218 14:02:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.218 14:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:39.218 ************************************ 00:07:39.218 START TEST app_cmdline 00:07:39.218 ************************************ 00:07:39.218 14:02:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:39.218 * Looking for test storage... 
00:07:39.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:39.218 14:02:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.218 14:02:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.218 14:02:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.218 14:02:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.218 14:02:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.218 14:02:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.218 14:02:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.218 14:02:41 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.218 14:02:41 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.218 14:02:41 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.218 14:02:41 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.218 14:02:41 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.218 14:02:41 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.218 14:02:41 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.218 14:02:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.218 14:02:41 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.218 14:02:41 -- scripts/common.sh@344 -- # : 1 00:07:39.218 14:02:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.218 14:02:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.218 14:02:41 -- scripts/common.sh@364 -- # decimal 1 00:07:39.218 14:02:41 -- scripts/common.sh@352 -- # local d=1 00:07:39.218 14:02:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.218 14:02:41 -- scripts/common.sh@354 -- # echo 1 00:07:39.218 14:02:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.218 14:02:41 -- scripts/common.sh@365 -- # decimal 2 00:07:39.218 14:02:41 -- scripts/common.sh@352 -- # local d=2 00:07:39.218 14:02:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.218 14:02:41 -- scripts/common.sh@354 -- # echo 2 00:07:39.218 14:02:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.218 14:02:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.218 14:02:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.218 14:02:41 -- scripts/common.sh@367 -- # return 0 00:07:39.218 14:02:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.218 14:02:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.218 --rc genhtml_branch_coverage=1 00:07:39.218 --rc genhtml_function_coverage=1 00:07:39.218 --rc genhtml_legend=1 00:07:39.218 --rc geninfo_all_blocks=1 00:07:39.218 --rc geninfo_unexecuted_blocks=1 00:07:39.218 00:07:39.218 ' 00:07:39.218 14:02:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.218 --rc genhtml_branch_coverage=1 00:07:39.218 --rc genhtml_function_coverage=1 00:07:39.218 --rc genhtml_legend=1 00:07:39.218 --rc geninfo_all_blocks=1 00:07:39.218 --rc geninfo_unexecuted_blocks=1 00:07:39.218 00:07:39.218 ' 00:07:39.218 14:02:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.218 --rc genhtml_branch_coverage=1 00:07:39.218 --rc genhtml_function_coverage=1 00:07:39.218 --rc genhtml_legend=1 00:07:39.218 --rc geninfo_all_blocks=1 00:07:39.218 --rc geninfo_unexecuted_blocks=1 00:07:39.218 00:07:39.218 ' 00:07:39.218 14:02:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.218 --rc genhtml_branch_coverage=1 00:07:39.218 --rc genhtml_function_coverage=1 00:07:39.218 --rc genhtml_legend=1 00:07:39.218 --rc geninfo_all_blocks=1 00:07:39.218 --rc geninfo_unexecuted_blocks=1 00:07:39.218 00:07:39.218 ' 00:07:39.218 14:02:41 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:39.218 14:02:41 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60085 00:07:39.218 14:02:41 -- app/cmdline.sh@18 -- # waitforlisten 60085 00:07:39.218 14:02:41 -- common/autotest_common.sh@829 -- # '[' -z 60085 ']' 00:07:39.218 14:02:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.218 14:02:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.218 14:02:41 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:39.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.218 14:02:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.218 14:02:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.218 14:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:39.218 [2024-12-08 14:02:42.081747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.218 [2024-12-08 14:02:42.081889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:07:39.478 [2024-12-08 14:02:42.232125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.739 [2024-12-08 14:02:42.500011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.739 [2024-12-08 14:02:42.500278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.150 14:02:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.150 14:02:43 -- common/autotest_common.sh@862 -- # return 0 00:07:41.150 14:02:43 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:41.150 { 00:07:41.150 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:41.150 "fields": { 00:07:41.150 "major": 24, 00:07:41.150 "minor": 1, 00:07:41.150 "patch": 1, 00:07:41.150 "suffix": "-pre", 00:07:41.150 "commit": "c13c99a5e" 00:07:41.150 } 00:07:41.150 } 00:07:41.150 14:02:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:41.150 14:02:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:41.150 14:02:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:41.150 14:02:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:41.150 14:02:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:41.150 14:02:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:41.150 14:02:43 -- app/cmdline.sh@26 -- # sort 00:07:41.150 14:02:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.150 14:02:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.150 14:02:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.150 14:02:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:41.150 14:02:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:41.150 14:02:43 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.150 14:02:43 -- common/autotest_common.sh@650 -- # local es=0 00:07:41.150 14:02:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.150 14:02:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.150 14:02:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.150 14:02:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.150 14:02:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.150 14:02:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.150 14:02:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.150 14:02:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.150 14:02:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:41.150 14:02:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.414 request: 00:07:41.414 { 00:07:41.414 "method": "env_dpdk_get_mem_stats", 00:07:41.414 "req_id": 1 00:07:41.414 } 00:07:41.414 Got JSON-RPC error response 00:07:41.414 response: 00:07:41.414 { 00:07:41.414 "code": -32601, 00:07:41.414 "message": "Method not found" 00:07:41.414 } 00:07:41.414 14:02:44 -- common/autotest_common.sh@653 -- # es=1 00:07:41.414 14:02:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.414 14:02:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.414 14:02:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.414 14:02:44 -- app/cmdline.sh@1 -- # killprocess 60085 00:07:41.414 14:02:44 -- common/autotest_common.sh@936 -- # '[' -z 60085 ']' 00:07:41.414 14:02:44 -- common/autotest_common.sh@940 -- # kill -0 60085 00:07:41.414 14:02:44 -- common/autotest_common.sh@941 -- # uname 00:07:41.414 14:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:41.414 14:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60085 00:07:41.414 14:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:41.414 14:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:41.414 14:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60085' 00:07:41.414 killing process with pid 60085 00:07:41.414 14:02:44 -- common/autotest_common.sh@955 -- # kill 60085 00:07:41.414 14:02:44 -- common/autotest_common.sh@960 -- # wait 60085 00:07:42.810 00:07:42.810 real 0m3.618s 00:07:42.810 user 0m3.891s 00:07:42.810 sys 0m0.670s 00:07:42.810 14:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.810 ************************************ 00:07:42.810 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:42.810 END TEST app_cmdline 00:07:42.810 ************************************ 00:07:42.810 14:02:45 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:42.810 14:02:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.810 14:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.810 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:42.810 
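This is the RPC allowlist working as intended: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method comes back as JSON-RPC error -32601 (Method not found). A minimal sketch of the same check against a target started that way:
$ ./scripts/rpc.py spdk_get_version          # allowed; returns the version object shown above
$ ./scripts/rpc.py rpc_get_methods           # allowed; lists exactly the permitted methods
$ ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with code -32601, as logged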
************************************ 00:07:42.810 START TEST version 00:07:42.810 ************************************ 00:07:42.810 14:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:42.810 * Looking for test storage... 00:07:42.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:42.810 14:02:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:42.810 14:02:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:42.810 14:02:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:42.810 14:02:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:42.810 14:02:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:42.810 14:02:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:42.810 14:02:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:42.810 14:02:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:42.810 14:02:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:42.810 14:02:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.810 14:02:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:42.810 14:02:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:42.810 14:02:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:42.810 14:02:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:42.810 14:02:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:42.810 14:02:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:42.810 14:02:45 -- scripts/common.sh@344 -- # : 1 00:07:42.810 14:02:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:42.810 14:02:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.810 14:02:45 -- scripts/common.sh@364 -- # decimal 1 00:07:42.810 14:02:45 -- scripts/common.sh@352 -- # local d=1 00:07:42.810 14:02:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.810 14:02:45 -- scripts/common.sh@354 -- # echo 1 00:07:42.810 14:02:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:42.810 14:02:45 -- scripts/common.sh@365 -- # decimal 2 00:07:42.810 14:02:45 -- scripts/common.sh@352 -- # local d=2 00:07:42.810 14:02:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.810 14:02:45 -- scripts/common.sh@354 -- # echo 2 00:07:42.810 14:02:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:42.810 14:02:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:42.810 14:02:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:42.810 14:02:45 -- scripts/common.sh@367 -- # return 0 00:07:42.810 14:02:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.810 14:02:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.810 --rc genhtml_branch_coverage=1 00:07:42.810 --rc genhtml_function_coverage=1 00:07:42.810 --rc genhtml_legend=1 00:07:42.810 --rc geninfo_all_blocks=1 00:07:42.810 --rc geninfo_unexecuted_blocks=1 00:07:42.810 00:07:42.810 ' 00:07:42.810 14:02:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.810 --rc genhtml_branch_coverage=1 00:07:42.810 --rc genhtml_function_coverage=1 00:07:42.810 --rc genhtml_legend=1 00:07:42.810 --rc geninfo_all_blocks=1 00:07:42.810 --rc geninfo_unexecuted_blocks=1 00:07:42.810 00:07:42.810 ' 00:07:42.810 14:02:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:42.810 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:42.810 --rc genhtml_branch_coverage=1 00:07:42.810 --rc genhtml_function_coverage=1 00:07:42.810 --rc genhtml_legend=1 00:07:42.810 --rc geninfo_all_blocks=1 00:07:42.810 --rc geninfo_unexecuted_blocks=1 00:07:42.810 00:07:42.810 ' 00:07:42.810 14:02:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.810 --rc genhtml_branch_coverage=1 00:07:42.810 --rc genhtml_function_coverage=1 00:07:42.810 --rc genhtml_legend=1 00:07:42.810 --rc geninfo_all_blocks=1 00:07:42.810 --rc geninfo_unexecuted_blocks=1 00:07:42.810 00:07:42.810 ' 00:07:42.810 14:02:45 -- app/version.sh@17 -- # get_header_version major 00:07:42.810 14:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.810 14:02:45 -- app/version.sh@14 -- # tr -d '"' 00:07:42.810 14:02:45 -- app/version.sh@14 -- # cut -f2 00:07:42.810 14:02:45 -- app/version.sh@17 -- # major=24 00:07:42.810 14:02:45 -- app/version.sh@18 -- # get_header_version minor 00:07:42.810 14:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.810 14:02:45 -- app/version.sh@14 -- # cut -f2 00:07:42.810 14:02:45 -- app/version.sh@14 -- # tr -d '"' 00:07:42.810 14:02:45 -- app/version.sh@18 -- # minor=1 00:07:42.810 14:02:45 -- app/version.sh@19 -- # get_header_version patch 00:07:42.810 14:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.810 14:02:45 -- app/version.sh@14 -- # cut -f2 00:07:42.810 14:02:45 -- app/version.sh@14 -- # tr -d '"' 00:07:42.810 14:02:45 -- app/version.sh@19 -- # patch=1 00:07:42.810 14:02:45 -- app/version.sh@20 -- # get_header_version suffix 00:07:42.810 14:02:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.810 14:02:45 -- app/version.sh@14 -- # cut -f2 00:07:42.810 14:02:45 -- app/version.sh@14 -- # tr -d '"' 00:07:42.810 14:02:45 -- app/version.sh@20 -- # suffix=-pre 00:07:42.810 14:02:45 -- app/version.sh@22 -- # version=24.1 00:07:42.810 14:02:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:42.810 14:02:45 -- app/version.sh@25 -- # version=24.1.1 00:07:42.810 14:02:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:42.810 14:02:45 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:42.810 14:02:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:43.072 14:02:45 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:43.072 14:02:45 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:43.072 00:07:43.072 real 0m0.207s 00:07:43.072 user 0m0.125s 00:07:43.072 sys 0m0.112s 00:07:43.072 14:02:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.072 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.072 ************************************ 00:07:43.072 END TEST version 00:07:43.072 ************************************ 00:07:43.072 14:02:45 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:43.072 14:02:45 -- spdk/autotest.sh@191 -- # uname -s 00:07:43.072 14:02:45 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
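version.sh derives each component by grepping the SPDK_VERSION_* macros out of include/spdk/version.h, exactly as traced above; the major number, for instance, reduces to this pipeline:
$ grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
24
The script then reassembles 24.1.1rc0 and cross-checks it against python3 -c 'import spdk; print(spdk.__version__)'.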
00:07:43.072 14:02:45 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:43.072 14:02:45 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:43.072 14:02:45 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:43.072 14:02:45 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:43.072 14:02:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.072 14:02:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.072 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.072 ************************************ 00:07:43.072 START TEST blockdev_nvme 00:07:43.072 ************************************ 00:07:43.072 14:02:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:43.072 * Looking for test storage... 00:07:43.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:43.072 14:02:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:43.072 14:02:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:43.072 14:02:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:43.072 14:02:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:43.072 14:02:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:43.072 14:02:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:43.072 14:02:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:43.072 14:02:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:43.072 14:02:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:43.072 14:02:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.072 14:02:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:43.072 14:02:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:43.072 14:02:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:43.072 14:02:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:43.072 14:02:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:43.072 14:02:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:43.072 14:02:45 -- scripts/common.sh@344 -- # : 1 00:07:43.072 14:02:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:43.072 14:02:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.072 14:02:45 -- scripts/common.sh@364 -- # decimal 1 00:07:43.072 14:02:45 -- scripts/common.sh@352 -- # local d=1 00:07:43.072 14:02:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.072 14:02:45 -- scripts/common.sh@354 -- # echo 1 00:07:43.072 14:02:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:43.072 14:02:45 -- scripts/common.sh@365 -- # decimal 2 00:07:43.072 14:02:45 -- scripts/common.sh@352 -- # local d=2 00:07:43.072 14:02:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.072 14:02:45 -- scripts/common.sh@354 -- # echo 2 00:07:43.072 14:02:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:43.072 14:02:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:43.072 14:02:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:43.072 14:02:45 -- scripts/common.sh@367 -- # return 0 00:07:43.072 14:02:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.072 14:02:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.072 --rc genhtml_branch_coverage=1 00:07:43.072 --rc genhtml_function_coverage=1 00:07:43.072 --rc genhtml_legend=1 00:07:43.072 --rc geninfo_all_blocks=1 00:07:43.072 --rc geninfo_unexecuted_blocks=1 00:07:43.072 00:07:43.072 ' 00:07:43.072 14:02:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.072 --rc genhtml_branch_coverage=1 00:07:43.072 --rc genhtml_function_coverage=1 00:07:43.072 --rc genhtml_legend=1 00:07:43.072 --rc geninfo_all_blocks=1 00:07:43.072 --rc geninfo_unexecuted_blocks=1 00:07:43.072 00:07:43.072 ' 00:07:43.072 14:02:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:43.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.072 --rc genhtml_branch_coverage=1 00:07:43.072 --rc genhtml_function_coverage=1 00:07:43.072 --rc genhtml_legend=1 00:07:43.073 --rc geninfo_all_blocks=1 00:07:43.073 --rc geninfo_unexecuted_blocks=1 00:07:43.073 00:07:43.073 ' 00:07:43.073 14:02:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:43.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.073 --rc genhtml_branch_coverage=1 00:07:43.073 --rc genhtml_function_coverage=1 00:07:43.073 --rc genhtml_legend=1 00:07:43.073 --rc geninfo_all_blocks=1 00:07:43.073 --rc geninfo_unexecuted_blocks=1 00:07:43.073 00:07:43.073 ' 00:07:43.073 14:02:45 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:43.073 14:02:45 -- bdev/nbd_common.sh@6 -- # set -e 00:07:43.073 14:02:45 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:43.073 14:02:45 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.073 14:02:45 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:43.073 14:02:45 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:43.073 14:02:45 -- bdev/blockdev.sh@18 -- # : 00:07:43.073 14:02:45 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:43.073 14:02:45 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:43.073 14:02:45 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:43.073 14:02:45 -- bdev/blockdev.sh@672 -- # uname -s 00:07:43.073 14:02:45 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:43.073 14:02:45 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:43.073 14:02:45 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:43.073 14:02:45 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:43.073 14:02:45 -- bdev/blockdev.sh@682 -- # dek= 00:07:43.073 14:02:45 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:43.073 14:02:45 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:43.073 14:02:45 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:43.073 14:02:45 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:43.073 14:02:45 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:43.073 14:02:45 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:43.073 14:02:45 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60268 00:07:43.073 14:02:45 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:43.073 14:02:45 -- bdev/blockdev.sh@47 -- # waitforlisten 60268 00:07:43.073 14:02:45 -- common/autotest_common.sh@829 -- # '[' -z 60268 ']' 00:07:43.073 14:02:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.073 14:02:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.073 14:02:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.073 14:02:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.073 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:43.073 14:02:45 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:43.335 [2024-12-08 14:02:46.056874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.335 [2024-12-08 14:02:46.057052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60268 ] 00:07:43.335 [2024-12-08 14:02:46.211529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.595 [2024-12-08 14:02:46.398726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.595 [2024-12-08 14:02:46.398909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.981 14:02:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.981 14:02:47 -- common/autotest_common.sh@862 -- # return 0 00:07:44.981 14:02:47 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:44.981 14:02:47 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:44.981 14:02:47 -- bdev/blockdev.sh@79 -- # local json 00:07:44.981 14:02:47 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:44.981 14:02:47 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:44.981 14:02:47 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:44.981 14:02:47 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.982 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.982 14:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.982 14:02:47 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:44.982 14:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.982 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.982 14:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.982 14:02:47 -- bdev/blockdev.sh@738 -- # cat 00:07:44.982 14:02:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:44.982 14:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.982 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.982 14:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.982 14:02:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:44.982 14:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.982 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 14:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.243 14:02:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:45.243 14:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.243 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 14:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.243 14:02:47 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:45.243 14:02:47 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:45.243 14:02:47 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:45.243 14:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.243 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.243 14:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.243 14:02:47 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:45.243 14:02:47 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:45.244 14:02:47 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3768254d-0a3e-41ae-8edd-cbfa4916e5a9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3768254d-0a3e-41ae-8edd-cbfa4916e5a9",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ba8c58f0-a21c-4838-b865-ea9050716d15"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ba8c58f0-a21c-4838-b865-ea9050716d15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aa9c9eda-0c5a-486d-aea5-ecabdb2b6ff4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa9c9eda-0c5a-486d-aea5-ecabdb2b6ff4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ad03b503-21a7-4b6b-8f52-b7bd2995e61e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad03b503-21a7-4b6b-8f52-b7bd2995e61e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c9748dd8-d448-4a83-a7aa-5b06a28b5d90"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c9748dd8-d448-4a83-a7aa-5b06a28b5d90",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d9891d31-d69a-48c7-b525-3f6e9ec51d21"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d9891d31-d69a-48c7-b525-3f6e9ec51d21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:45.244 14:02:48 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:45.244 14:02:48 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:45.244 14:02:48 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:45.244 14:02:48 -- bdev/blockdev.sh@752 -- # killprocess 60268 00:07:45.244 14:02:48 -- common/autotest_common.sh@936 -- # '[' -z 60268 ']' 00:07:45.244 14:02:48 -- common/autotest_common.sh@940 -- # kill -0 60268 00:07:45.244 14:02:48 -- common/autotest_common.sh@941 -- # uname 00:07:45.244 14:02:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:45.244 14:02:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60268 
00:07:45.244 14:02:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:45.244 14:02:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:45.244 killing process with pid 60268 00:07:45.244 14:02:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60268' 00:07:45.244 14:02:48 -- common/autotest_common.sh@955 -- # kill 60268 00:07:45.244 14:02:48 -- common/autotest_common.sh@960 -- # wait 60268 00:07:46.629 14:02:49 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:46.629 14:02:49 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:46.629 14:02:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:46.629 14:02:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.629 14:02:49 -- common/autotest_common.sh@10 -- # set +x 00:07:46.629 ************************************ 00:07:46.629 START TEST bdev_hello_world 00:07:46.629 ************************************ 00:07:46.629 14:02:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:46.629 [2024-12-08 14:02:49.410122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:46.629 [2024-12-08 14:02:49.410263] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60354 ] 00:07:46.890 [2024-12-08 14:02:49.557157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.890 [2024-12-08 14:02:49.732424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.462 [2024-12-08 14:02:50.226182] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:47.462 [2024-12-08 14:02:50.226232] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:47.462 [2024-12-08 14:02:50.226248] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:47.462 [2024-12-08 14:02:50.228259] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:47.462 [2024-12-08 14:02:50.229349] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:47.462 [2024-12-08 14:02:50.229377] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:47.462 [2024-12-08 14:02:50.229802] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
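The hello_world pass above drives a single bdev end to end: the example app opens Nvme0n1 through the bdev layer, writes a string, reads it back, and confirms "Hello World!". A minimal sketch of re-running that invocation by hand, using the same workspace paths the harness logs above (any bdev name reported by bdev_get_bdevs would work in place of Nvme0n1):

    # Sketch: invoke the hello_bdev example the same way blockdev.sh does.
    # --json attaches the test NVMe controllers; -b names the bdev to exercise.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Nvme0n1
    # Expected tail of the output, per the log above:
    #   Read string from bdev : Hello World!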
00:07:47.462 00:07:47.462 [2024-12-08 14:02:50.229829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:48.034 00:07:48.034 real 0m1.563s 00:07:48.034 user 0m1.259s 00:07:48.034 sys 0m0.198s 00:07:48.034 14:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.034 14:02:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 ************************************ 00:07:48.034 END TEST bdev_hello_world 00:07:48.034 ************************************ 00:07:48.302 14:02:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:48.302 14:02:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:48.302 14:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.302 14:02:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.302 ************************************ 00:07:48.302 START TEST bdev_bounds 00:07:48.302 ************************************ 00:07:48.302 14:02:50 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:48.302 14:02:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=60390 00:07:48.302 14:02:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:48.302 Process bdevio pid: 60390 00:07:48.302 14:02:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60390' 00:07:48.303 14:02:50 -- bdev/blockdev.sh@291 -- # waitforlisten 60390 00:07:48.303 14:02:50 -- common/autotest_common.sh@829 -- # '[' -z 60390 ']' 00:07:48.303 14:02:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.303 14:02:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.303 14:02:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:48.303 14:02:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.303 14:02:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.303 14:02:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.303 [2024-12-08 14:02:51.033137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.303 [2024-12-08 14:02:51.033253] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ] 00:07:48.303 [2024-12-08 14:02:51.181494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.577 [2024-12-08 14:02:51.374119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.577 [2024-12-08 14:02:51.374263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.577 [2024-12-08 14:02:51.374345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.961 14:02:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.961 14:02:52 -- common/autotest_common.sh@862 -- # return 0 00:07:49.961 14:02:52 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:49.961 I/O targets: 00:07:49.961 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:49.961 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:49.961 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:49.961 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:49.961 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:49.961 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:49.961 00:07:49.961 00:07:49.961 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.961 http://cunit.sourceforge.net/ 00:07:49.961 00:07:49.961 00:07:49.961 Suite: bdevio tests on: Nvme3n1 00:07:49.961 Test: blockdev write read block ...passed 00:07:49.961 Test: blockdev write zeroes read block ...passed 00:07:49.961 Test: blockdev write zeroes read no split ...passed 00:07:49.961 Test: blockdev write zeroes read split ...passed 00:07:49.961 Test: blockdev write zeroes read split partial ...passed 00:07:49.961 Test: blockdev reset ...[2024-12-08 14:02:52.691503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:49.961 [2024-12-08 14:02:52.694282] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
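For reference, the bounds test running here is a two-process sequence rather than a single binary: bdevio starts with -w so it parks on the RPC socket, and tests.py perform_tests then drives all suites over that socket. A hand-run sketch using the commands exactly as they appear in this log (the trailing '' is the empty extra-arguments slot the harness always passes):

    # Sketch: reproduce the bdev_bounds flow by hand.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
    # ...once /var/tmp/spdk.sock is up (the harness polls for it), run the suites:
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests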
00:07:49.961 passed 00:07:49.961 Test: blockdev write read 8 blocks ...passed 00:07:49.961 Test: blockdev write read size > 128k ...passed 00:07:49.961 Test: blockdev write read invalid size ...passed 00:07:49.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.961 Test: blockdev write read max offset ...passed 00:07:49.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.961 Test: blockdev writev readv 8 blocks ...passed 00:07:49.961 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.961 Test: blockdev writev readv block ...passed 00:07:49.961 Test: blockdev writev readv size > 128k ...passed 00:07:49.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.961 Test: blockdev comparev and writev ...[2024-12-08 14:02:52.713924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268e0e000 len:0x1000 00:07:49.961 [2024-12-08 14:02:52.714002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.961 passed 00:07:49.961 Test: blockdev nvme passthru rw ...passed 00:07:49.961 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:02:52.716828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.961 [2024-12-08 14:02:52.716856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:49.961 passed 00:07:49.961 Test: blockdev nvme admin passthru ...passed 00:07:49.961 Test: blockdev copy ...passed 00:07:49.961 Suite: bdevio tests on: Nvme2n3 00:07:49.961 Test: blockdev write read block ...passed 00:07:49.961 Test: blockdev write zeroes read block ...passed 00:07:49.961 Test: blockdev write zeroes read no split ...passed 00:07:49.961 Test: blockdev write zeroes read split ...passed 00:07:49.961 Test: blockdev write zeroes read split partial ...passed 00:07:49.961 Test: blockdev reset ...[2024-12-08 14:02:52.773844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:49.961 passed 00:07:49.961 Test: blockdev write read 8 blocks ...[2024-12-08 14:02:52.777906] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.961 passed 00:07:49.961 Test: blockdev write read size > 128k ...passed 00:07:49.961 Test: blockdev write read invalid size ...passed 00:07:49.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.961 Test: blockdev write read max offset ...passed 00:07:49.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.961 Test: blockdev writev readv 8 blocks ...passed 00:07:49.961 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.961 Test: blockdev writev readv block ...passed 00:07:49.961 Test: blockdev writev readv size > 128k ...passed 00:07:49.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.961 Test: blockdev comparev and writev ...[2024-12-08 14:02:52.796748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268e0a000 len:0x1000 00:07:49.961 [2024-12-08 14:02:52.796792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.961 passed 00:07:49.961 Test: blockdev nvme passthru rw ...passed 00:07:49.961 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.961 Test: blockdev nvme admin passthru ...[2024-12-08 14:02:52.799381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.961 [2024-12-08 14:02:52.799414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:49.961 passed 00:07:49.961 Test: blockdev copy ...passed 00:07:49.961 Suite: bdevio tests on: Nvme2n2 00:07:49.961 Test: blockdev write read block ...passed 00:07:49.961 Test: blockdev write zeroes read block ...passed 00:07:49.961 Test: blockdev write zeroes read no split ...passed 00:07:49.961 Test: blockdev write zeroes read split ...passed 00:07:49.961 Test: blockdev write zeroes read split partial ...passed 00:07:49.961 Test: blockdev reset ...[2024-12-08 14:02:52.854648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:49.961 passed 00:07:49.961 Test: blockdev write read 8 blocks ...[2024-12-08 14:02:52.857909] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:49.961 passed 00:07:49.961 Test: blockdev write read size > 128k ...passed 00:07:49.961 Test: blockdev write read invalid size ...passed 00:07:49.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.961 Test: blockdev write read max offset ...passed 00:07:49.962 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.962 Test: blockdev writev readv 8 blocks ...passed 00:07:49.962 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.962 Test: blockdev writev readv block ...passed 00:07:49.962 Test: blockdev writev readv size > 128k ...passed 00:07:49.962 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.962 Test: blockdev comparev and writev ...[2024-12-08 14:02:52.875741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26aa06000 len:0x1000 00:07:49.962 [2024-12-08 14:02:52.875780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:49.962 passed 00:07:49.962 Test: blockdev nvme passthru rw ...passed 00:07:49.962 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.962 Test: blockdev nvme admin passthru ...[2024-12-08 14:02:52.878627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:49.962 [2024-12-08 14:02:52.878659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.224 passed 00:07:50.224 Test: blockdev copy ...passed 00:07:50.224 Suite: bdevio tests on: Nvme2n1 00:07:50.224 Test: blockdev write read block ...passed 00:07:50.224 Test: blockdev write zeroes read block ...passed 00:07:50.224 Test: blockdev write zeroes read no split ...passed 00:07:50.224 Test: blockdev write zeroes read split ...passed 00:07:50.224 Test: blockdev write zeroes read split partial ...passed 00:07:50.224 Test: blockdev reset ...[2024-12-08 14:02:52.937703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:50.224 [2024-12-08 14:02:52.940840] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:50.224 passed 00:07:50.224 Test: blockdev write read 8 blocks ...passed 00:07:50.224 Test: blockdev write read size > 128k ...passed 00:07:50.224 Test: blockdev write read invalid size ...passed 00:07:50.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.224 Test: blockdev write read max offset ...passed 00:07:50.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.224 Test: blockdev writev readv 8 blocks ...passed 00:07:50.224 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.224 Test: blockdev writev readv block ...passed 00:07:50.224 Test: blockdev writev readv size > 128k ...passed 00:07:50.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.224 Test: blockdev comparev and writev ...[2024-12-08 14:02:52.959065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26aa01000 len:0x1000 00:07:50.224 [2024-12-08 14:02:52.959105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.224 passed 00:07:50.224 Test: blockdev nvme passthru rw ...passed 00:07:50.224 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:02:52.962061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.224 [2024-12-08 14:02:52.962097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.224 passed 00:07:50.224 Test: blockdev nvme admin passthru ...passed 00:07:50.224 Test: blockdev copy ...passed 00:07:50.224 Suite: bdevio tests on: Nvme1n1 00:07:50.224 Test: blockdev write read block ...passed 00:07:50.224 Test: blockdev write zeroes read block ...passed 00:07:50.224 Test: blockdev write zeroes read no split ...passed 00:07:50.224 Test: blockdev write zeroes read split ...passed 00:07:50.224 Test: blockdev write zeroes read split partial ...passed 00:07:50.224 Test: blockdev reset ...[2024-12-08 14:02:53.023241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:50.224 passed 00:07:50.224 Test: blockdev write read 8 blocks ...[2024-12-08 14:02:53.026352] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:50.224 passed 00:07:50.224 Test: blockdev write read size > 128k ...passed 00:07:50.224 Test: blockdev write read invalid size ...passed 00:07:50.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.224 Test: blockdev write read max offset ...passed 00:07:50.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.224 Test: blockdev writev readv 8 blocks ...passed 00:07:50.224 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.224 Test: blockdev writev readv block ...passed 00:07:50.224 Test: blockdev writev readv size > 128k ...passed 00:07:50.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.224 Test: blockdev comparev and writev ...[2024-12-08 14:02:53.043199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x269806000 len:0x1000 00:07:50.224 [2024-12-08 14:02:53.043237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.224 passed 00:07:50.224 Test: blockdev nvme passthru rw ...passed 00:07:50.224 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:02:53.045764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.224 [2024-12-08 14:02:53.045793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.224 passed 00:07:50.224 Test: blockdev nvme admin passthru ...passed 00:07:50.224 Test: blockdev copy ...passed 00:07:50.224 Suite: bdevio tests on: Nvme0n1 00:07:50.224 Test: blockdev write read block ...passed 00:07:50.224 Test: blockdev write zeroes read block ...passed 00:07:50.224 Test: blockdev write zeroes read no split ...passed 00:07:50.224 Test: blockdev write zeroes read split ...passed 00:07:50.224 Test: blockdev write zeroes read split partial ...passed 00:07:50.224 Test: blockdev reset ...[2024-12-08 14:02:53.107963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:50.224 passed 00:07:50.224 Test: blockdev write read 8 blocks ...[2024-12-08 14:02:53.111230] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:50.224 passed 00:07:50.224 Test: blockdev write read size > 128k ...passed 00:07:50.224 Test: blockdev write read invalid size ...passed 00:07:50.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.224 Test: blockdev write read max offset ...passed 00:07:50.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.224 Test: blockdev writev readv 8 blocks ...passed 00:07:50.224 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.224 Test: blockdev writev readv block ...passed 00:07:50.224 Test: blockdev writev readv size > 128k ...passed 00:07:50.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.224 Test: blockdev comparev and writev ...passed 00:07:50.224 Test: blockdev nvme passthru rw ...[2024-12-08 14:02:53.127188] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:50.224 separate metadata which is not supported yet. 
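The comparev results above line up with the supported_io_types flags in the bdev dump earlier in this log: every NVMe namespace reports "compare": true and "compare_and_write": false, and Nvme0n1 alone is skipped because it carries separate metadata. A quick way to pull those flags from a live target (bdev_get_bdevs is the RPC whose shell-quoted output appears above; this sketch assumes the target is listening on the default RPC socket and that jq is installed):

    # Sketch: tabulate per-bdev compare / compare-and-write support.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | [.name, (.supported_io_types.compare|tostring),
                      (.supported_io_types.compare_and_write|tostring)] | @tsv'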
00:07:50.224 passed 00:07:50.224 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:02:53.128827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:50.224 [2024-12-08 14:02:53.128864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:50.224 passed 00:07:50.224 Test: blockdev nvme admin passthru ...passed 00:07:50.224 Test: blockdev copy ...passed 00:07:50.224 00:07:50.224 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.224 suites 6 6 n/a 0 0 00:07:50.224 tests 138 138 138 0 0 00:07:50.224 asserts 893 893 893 0 n/a 00:07:50.224 00:07:50.224 Elapsed time = 1.277 seconds 00:07:50.485 0 00:07:50.485 14:02:53 -- bdev/blockdev.sh@293 -- # killprocess 60390 00:07:50.485 14:02:53 -- common/autotest_common.sh@936 -- # '[' -z 60390 ']' 00:07:50.485 14:02:53 -- common/autotest_common.sh@940 -- # kill -0 60390 00:07:50.485 14:02:53 -- common/autotest_common.sh@941 -- # uname 00:07:50.485 14:02:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.485 14:02:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60390 00:07:50.485 14:02:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.485 killing process with pid 60390 00:07:50.485 14:02:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.485 14:02:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60390' 00:07:50.485 14:02:53 -- common/autotest_common.sh@955 -- # kill 60390 00:07:50.485 14:02:53 -- common/autotest_common.sh@960 -- # wait 60390 00:07:51.058 14:02:53 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:51.058 00:07:51.058 real 0m2.809s 00:07:51.058 user 0m7.245s 00:07:51.058 sys 0m0.330s 00:07:51.058 14:02:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.058 14:02:53 -- common/autotest_common.sh@10 -- # set +x 00:07:51.058 ************************************ 00:07:51.058 END TEST bdev_bounds 00:07:51.058 ************************************ 00:07:51.058 14:02:53 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:51.058 14:02:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:51.058 14:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.058 14:02:53 -- common/autotest_common.sh@10 -- # set +x 00:07:51.058 ************************************ 00:07:51.058 START TEST bdev_nbd 00:07:51.058 ************************************ 00:07:51.058 14:02:53 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:51.058 14:02:53 -- bdev/blockdev.sh@298 -- # uname -s 00:07:51.058 14:02:53 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:51.058 14:02:53 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.058 14:02:53 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.058 14:02:53 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.058 14:02:53 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:51.058 14:02:53 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:51.058 14:02:53 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:51.058 14:02:53 -- bdev/blockdev.sh@309 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.058 14:02:53 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:51.058 14:02:53 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:51.058 14:02:53 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:51.058 14:02:53 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:51.058 14:02:53 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.058 14:02:53 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:51.058 14:02:53 -- bdev/blockdev.sh@316 -- # nbd_pid=60457 00:07:51.058 14:02:53 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:51.058 14:02:53 -- bdev/blockdev.sh@318 -- # waitforlisten 60457 /var/tmp/spdk-nbd.sock 00:07:51.059 14:02:53 -- common/autotest_common.sh@829 -- # '[' -z 60457 ']' 00:07:51.059 14:02:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:51.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:51.059 14:02:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.059 14:02:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:51.059 14:02:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.059 14:02:53 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:51.059 14:02:53 -- common/autotest_common.sh@10 -- # set +x 00:07:51.059 [2024-12-08 14:02:53.916697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.059 [2024-12-08 14:02:53.916805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.319 [2024-12-08 14:02:54.063385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.319 [2024-12-08 14:02:54.236656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.705 14:02:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.705 14:02:55 -- common/autotest_common.sh@862 -- # return 0 00:07:52.705 14:02:55 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@24 -- # local i 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:52.705 14:02:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:52.705 14:02:55 -- common/autotest_common.sh@867 -- # local i 00:07:52.705 14:02:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.705 14:02:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.705 14:02:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:52.705 14:02:55 -- common/autotest_common.sh@871 -- # break 00:07:52.705 14:02:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.705 14:02:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.705 14:02:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.705 1+0 records in 00:07:52.705 1+0 records out 00:07:52.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000711673 s, 5.8 MB/s 00:07:52.705 14:02:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.705 14:02:55 -- common/autotest_common.sh@884 -- # size=4096 00:07:52.705 14:02:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.705 14:02:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.705 14:02:55 -- common/autotest_common.sh@887 -- # return 0 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.705 14:02:55 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:52.705 14:02:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:52.966 14:02:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:52.966 14:02:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:52.966 14:02:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:52.966 14:02:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:52.966 14:02:55 -- common/autotest_common.sh@867 -- # local i 00:07:52.966 14:02:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.966 14:02:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.966 14:02:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:52.966 14:02:55 -- common/autotest_common.sh@871 -- # break 00:07:52.966 14:02:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.966 14:02:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.966 14:02:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.966 1+0 records in 00:07:52.966 1+0 records out 00:07:52.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105921 s, 3.9 MB/s 00:07:52.966 14:02:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.966 14:02:55 -- common/autotest_common.sh@884 -- # size=4096 00:07:52.966 14:02:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.966 14:02:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.966 14:02:55 -- common/autotest_common.sh@887 -- # return 0 00:07:52.966 14:02:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.966 14:02:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:52.966 14:02:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:53.227 14:02:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:53.227 14:02:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:53.227 14:02:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:53.227 14:02:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:53.227 14:02:56 -- common/autotest_common.sh@867 -- # local i 00:07:53.227 14:02:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.227 14:02:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.227 14:02:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:53.227 14:02:56 -- common/autotest_common.sh@871 -- # break 00:07:53.227 14:02:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.227 14:02:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.227 14:02:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.227 1+0 records in 00:07:53.227 1+0 records out 00:07:53.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010172 s, 4.0 MB/s 00:07:53.227 14:02:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.227 14:02:56 -- common/autotest_common.sh@884 -- # size=4096 00:07:53.227 14:02:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.227 14:02:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.227 14:02:56 -- common/autotest_common.sh@887 -- # return 0 
00:07:53.227 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.227 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.227 14:02:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:53.508 14:02:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:53.508 14:02:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:53.508 14:02:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:53.508 14:02:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:53.508 14:02:56 -- common/autotest_common.sh@867 -- # local i 00:07:53.508 14:02:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.508 14:02:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.508 14:02:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:53.508 14:02:56 -- common/autotest_common.sh@871 -- # break 00:07:53.508 14:02:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.508 14:02:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.508 14:02:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.508 1+0 records in 00:07:53.508 1+0 records out 00:07:53.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135245 s, 3.0 MB/s 00:07:53.508 14:02:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.508 14:02:56 -- common/autotest_common.sh@884 -- # size=4096 00:07:53.508 14:02:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.508 14:02:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.508 14:02:56 -- common/autotest_common.sh@887 -- # return 0 00:07:53.508 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.508 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.508 14:02:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:53.769 14:02:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:53.769 14:02:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:53.769 14:02:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:53.769 14:02:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:53.769 14:02:56 -- common/autotest_common.sh@867 -- # local i 00:07:53.769 14:02:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.769 14:02:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.769 14:02:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:53.769 14:02:56 -- common/autotest_common.sh@871 -- # break 00:07:53.769 14:02:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.769 14:02:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.769 14:02:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.769 1+0 records in 00:07:53.769 1+0 records out 00:07:53.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467252 s, 8.8 MB/s 00:07:53.769 14:02:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.769 14:02:56 -- common/autotest_common.sh@884 -- # size=4096 00:07:53.769 14:02:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.769 14:02:56 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:53.769 14:02:56 -- common/autotest_common.sh@887 -- # return 0 00:07:53.769 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.769 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:53.769 14:02:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:54.030 14:02:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:54.030 14:02:56 -- common/autotest_common.sh@867 -- # local i 00:07:54.030 14:02:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.030 14:02:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.030 14:02:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:54.030 14:02:56 -- common/autotest_common.sh@871 -- # break 00:07:54.030 14:02:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.030 14:02:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.030 14:02:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.030 1+0 records in 00:07:54.030 1+0 records out 00:07:54.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749526 s, 5.5 MB/s 00:07:54.030 14:02:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.030 14:02:56 -- common/autotest_common.sh@884 -- # size=4096 00:07:54.030 14:02:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.030 14:02:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.030 14:02:56 -- common/autotest_common.sh@887 -- # return 0 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd0", 00:07:54.030 "bdev_name": "Nvme0n1" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd1", 00:07:54.030 "bdev_name": "Nvme1n1" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd2", 00:07:54.030 "bdev_name": "Nvme2n1" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd3", 00:07:54.030 "bdev_name": "Nvme2n2" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd4", 00:07:54.030 "bdev_name": "Nvme2n3" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd5", 00:07:54.030 "bdev_name": "Nvme3n1" 00:07:54.030 } 00:07:54.030 ]' 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd0", 00:07:54.030 "bdev_name": "Nvme0n1" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd1", 00:07:54.030 "bdev_name": "Nvme1n1" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd2", 00:07:54.030 "bdev_name": "Nvme2n1" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd3", 00:07:54.030 
"bdev_name": "Nvme2n2" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd4", 00:07:54.030 "bdev_name": "Nvme2n3" 00:07:54.030 }, 00:07:54.030 { 00:07:54.030 "nbd_device": "/dev/nbd5", 00:07:54.030 "bdev_name": "Nvme3n1" 00:07:54.030 } 00:07:54.030 ]' 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@51 -- # local i 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.030 14:02:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@41 -- # break 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.290 14:02:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@41 -- # break 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.551 14:02:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@41 -- # break 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.812 14:02:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.813 14:02:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:55.073 
14:02:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@41 -- # break 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.073 14:02:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@41 -- # break 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@41 -- # break 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.335 14:02:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@65 -- # true 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@122 -- # count=0 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@127 -- # return 0 00:07:55.595 14:02:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:55.595 14:02:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@12 -- # local i 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:55.596 14:02:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:55.855 /dev/nbd0 00:07:55.855 14:02:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.855 14:02:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.855 14:02:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:55.855 14:02:58 -- common/autotest_common.sh@867 -- # local i 00:07:55.855 14:02:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.855 14:02:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.855 14:02:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:55.855 14:02:58 -- common/autotest_common.sh@871 -- # break 00:07:55.855 14:02:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.855 14:02:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.855 14:02:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.855 1+0 records in 00:07:55.855 1+0 records out 00:07:55.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110682 s, 3.7 MB/s 00:07:55.855 14:02:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.855 14:02:58 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.855 14:02:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.855 14:02:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.855 14:02:58 -- common/autotest_common.sh@887 -- # return 0 00:07:55.855 14:02:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.855 14:02:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:55.855 14:02:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:56.125 /dev/nbd1 00:07:56.125 14:02:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.125 14:02:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.125 14:02:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:56.125 14:02:58 -- common/autotest_common.sh@867 -- # local i 00:07:56.125 14:02:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.125 14:02:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.125 14:02:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:56.125 14:02:58 -- common/autotest_common.sh@871 -- # break 
00:07:56.125 14:02:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.125 14:02:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.125 14:02:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.125 1+0 records in 00:07:56.125 1+0 records out 00:07:56.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086784 s, 4.7 MB/s 00:07:56.125 14:02:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.125 14:02:58 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.125 14:02:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.125 14:02:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.125 14:02:58 -- common/autotest_common.sh@887 -- # return 0 00:07:56.125 14:02:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.125 14:02:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:56.125 14:02:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:56.434 /dev/nbd10 00:07:56.434 14:02:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:56.435 14:02:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:56.435 14:02:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:56.435 14:02:59 -- common/autotest_common.sh@867 -- # local i 00:07:56.435 14:02:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.435 14:02:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.435 14:02:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:56.435 14:02:59 -- common/autotest_common.sh@871 -- # break 00:07:56.435 14:02:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.435 14:02:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.435 14:02:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.435 1+0 records in 00:07:56.435 1+0 records out 00:07:56.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116837 s, 3.5 MB/s 00:07:56.435 14:02:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.435 14:02:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.435 14:02:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.435 14:02:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.435 14:02:59 -- common/autotest_common.sh@887 -- # return 0 00:07:56.435 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.435 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:56.435 14:02:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:56.696 /dev/nbd11 00:07:56.696 14:02:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:56.696 14:02:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:56.696 14:02:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:56.696 14:02:59 -- common/autotest_common.sh@867 -- # local i 00:07:56.696 14:02:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.696 14:02:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.696 14:02:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:56.696 14:02:59 -- 
common/autotest_common.sh@871 -- # break 00:07:56.696 14:02:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.696 14:02:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.696 14:02:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.696 1+0 records in 00:07:56.696 1+0 records out 00:07:56.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113993 s, 3.6 MB/s 00:07:56.696 14:02:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.696 14:02:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.696 14:02:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.696 14:02:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.696 14:02:59 -- common/autotest_common.sh@887 -- # return 0 00:07:56.696 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.696 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:56.696 14:02:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:56.696 /dev/nbd12 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:56.959 14:02:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:56.959 14:02:59 -- common/autotest_common.sh@867 -- # local i 00:07:56.959 14:02:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:56.959 14:02:59 -- common/autotest_common.sh@871 -- # break 00:07:56.959 14:02:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.959 1+0 records in 00:07:56.959 1+0 records out 00:07:56.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117459 s, 3.5 MB/s 00:07:56.959 14:02:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.959 14:02:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.959 14:02:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.959 14:02:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.959 14:02:59 -- common/autotest_common.sh@887 -- # return 0 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:56.959 /dev/nbd13 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:56.959 14:02:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:56.959 14:02:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:56.959 14:02:59 -- common/autotest_common.sh@867 -- # local i 00:07:56.959 14:02:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:56.959 14:02:59 -- common/autotest_common.sh@871 -- # break 00:07:56.959 14:02:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.959 14:02:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.959 1+0 records in 00:07:56.959 1+0 records out 00:07:56.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108016 s, 3.8 MB/s 00:07:56.959 14:02:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.959 14:02:59 -- common/autotest_common.sh@884 -- # size=4096 00:07:56.959 14:02:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.221 14:02:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.221 14:02:59 -- common/autotest_common.sh@887 -- # return 0 00:07:57.221 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.221 14:02:59 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.221 14:02:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.221 14:02:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.221 14:02:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd0", 00:07:57.221 "bdev_name": "Nvme0n1" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd1", 00:07:57.221 "bdev_name": "Nvme1n1" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd10", 00:07:57.221 "bdev_name": "Nvme2n1" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd11", 00:07:57.221 "bdev_name": "Nvme2n2" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd12", 00:07:57.221 "bdev_name": "Nvme2n3" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd13", 00:07:57.221 "bdev_name": "Nvme3n1" 00:07:57.221 } 00:07:57.221 ]' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd0", 00:07:57.221 "bdev_name": "Nvme0n1" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd1", 00:07:57.221 "bdev_name": "Nvme1n1" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd10", 00:07:57.221 "bdev_name": "Nvme2n1" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd11", 00:07:57.221 "bdev_name": "Nvme2n2" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd12", 00:07:57.221 "bdev_name": "Nvme2n3" 00:07:57.221 }, 00:07:57.221 { 00:07:57.221 "nbd_device": "/dev/nbd13", 00:07:57.221 "bdev_name": "Nvme3n1" 00:07:57.221 } 00:07:57.221 ]' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:57.221 /dev/nbd1 00:07:57.221 /dev/nbd10 00:07:57.221 /dev/nbd11 00:07:57.221 /dev/nbd12 00:07:57.221 /dev/nbd13' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:57.221 /dev/nbd1 00:07:57.221 /dev/nbd10 00:07:57.221 /dev/nbd11 00:07:57.221 /dev/nbd12 00:07:57.221 /dev/nbd13' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@65 -- # count=6 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@95 -- # count=6 
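With all six devices counted, the nbd_dd_data_verify phase that follows writes one 1 MiB random file to every exported NBD device with O_DIRECT, then byte-compares each device against the same file. Stripped of the xtrace plumbing, the round-trip traced below is:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M nbdrandtest "$dev"                              # verify phase: first 1 MiB must match
    done
    rm nbdrandtest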
00:07:57.221 14:03:00 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:57.221 256+0 records in 00:07:57.221 256+0 records out 00:07:57.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0090996 s, 115 MB/s 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.221 14:03:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:57.793 256+0 records in 00:07:57.793 256+0 records out 00:07:57.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.267735 s, 3.9 MB/s 00:07:57.793 14:03:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.793 14:03:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:57.793 256+0 records in 00:07:57.793 256+0 records out 00:07:57.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.276509 s, 3.8 MB/s 00:07:57.793 14:03:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.793 14:03:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:58.055 256+0 records in 00:07:58.055 256+0 records out 00:07:58.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259876 s, 4.0 MB/s 00:07:58.055 14:03:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.055 14:03:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:58.316 256+0 records in 00:07:58.316 256+0 records out 00:07:58.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.264438 s, 4.0 MB/s 00:07:58.316 14:03:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.316 14:03:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:58.577 256+0 records in 00:07:58.577 256+0 records out 00:07:58.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233285 s, 4.5 MB/s 00:07:58.577 14:03:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.577 14:03:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:58.838 256+0 records in 00:07:58.838 256+0 records out 00:07:58.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236416 s, 4.4 MB/s 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.838 14:03:01 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@51 -- # local i 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.838 14:03:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@41 -- # break 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.097 14:03:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.355 14:03:02 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@41 -- # break 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.355 14:03:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@41 -- # break 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.614 14:03:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:59.874 14:03:02 -- bdev/nbd_common.sh@41 -- # break 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@41 -- # break 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.875 14:03:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@41 -- # break 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.135 14:03:02 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@65 -- # true 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@104 -- # count=0 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@109 -- # return 0 00:08:00.397 14:03:03 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:00.397 14:03:03 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:00.657 malloc_lvol_verify 00:08:00.657 14:03:03 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:00.916 495e64a8-7542-4a56-8811-0e835b869d17 00:08:00.916 14:03:03 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:00.916 fcb9acce-3c6b-48a9-8cea-fe791d5435f2 00:08:00.917 14:03:03 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:01.177 /dev/nbd0 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:01.177 mke2fs 1.47.0 (5-Feb-2023) 00:08:01.177 Discarding device blocks: 0/4096 done 00:08:01.177 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:01.177 00:08:01.177 Allocating group tables: 0/1 done 00:08:01.177 Writing inode tables: 0/1 done 00:08:01.177 Creating journal (1024 blocks): done 00:08:01.177 Writing superblocks and filesystem accounting information: 0/1 done 00:08:01.177 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@51 -- # local i 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.177 14:03:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@41 -- # break 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:01.437 14:03:04 -- bdev/nbd_common.sh@147 -- # return 0 00:08:01.437 14:03:04 -- bdev/blockdev.sh@324 -- # killprocess 60457 00:08:01.437 14:03:04 -- common/autotest_common.sh@936 -- # '[' -z 60457 ']' 00:08:01.437 14:03:04 -- common/autotest_common.sh@940 -- # kill -0 60457 00:08:01.437 14:03:04 -- common/autotest_common.sh@941 -- # uname 00:08:01.437 14:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.437 14:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60457 00:08:01.437 14:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.437 killing process with pid 60457 00:08:01.437 14:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.437 14:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60457' 00:08:01.437 14:03:04 -- common/autotest_common.sh@955 -- # kill 60457 00:08:01.437 14:03:04 -- common/autotest_common.sh@960 -- # wait 60457 00:08:02.375 14:03:04 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:02.375 00:08:02.375 real 0m11.130s 00:08:02.375 user 0m14.829s 00:08:02.375 sys 0m3.471s 00:08:02.375 14:03:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.375 14:03:04 -- common/autotest_common.sh@10 -- # set +x 00:08:02.375 ************************************ 00:08:02.375 END TEST bdev_nbd 00:08:02.375 ************************************ 00:08:02.375 14:03:05 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:02.375 14:03:05 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:02.375 14:03:05 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:02.375 skipping fio tests on NVMe due to multi-ns failures. 00:08:02.375 14:03:05 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:02.375 14:03:05 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:02.375 14:03:05 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:02.375 14:03:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.375 14:03:05 -- common/autotest_common.sh@10 -- # set +x 00:08:02.375 ************************************ 00:08:02.375 START TEST bdev_verify 00:08:02.375 ************************************ 00:08:02.375 14:03:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:02.375 [2024-12-08 14:03:05.096775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
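bdev_verify, like the workload tests after it, drives the same bdevperf binary against the same generated bdev.json; only the workload knobs change. The invocation just launched, restated with its flags annotated (flag meanings per bdevperf usage; -C is passed through by blockdev.sh and its usage text is not shown in this log):

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev definitions
        -q 128      # queue depth per job
        -o 4096     # I/O size in bytes
        -w verify   # write-then-read-back workload
        -t 5        # seconds of I/O; setup and teardown account for the longer wall clock
        -C          # harness-passed option, documented by bdevperf's own usage text
        -m 0x3      # core mask 0x3 runs reactors on cores 0 and 1, matching the output above
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"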
00:08:02.375 [2024-12-08 14:03:05.096900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60844 ] 00:08:02.375 [2024-12-08 14:03:05.250557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.635 [2024-12-08 14:03:05.521961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.635 [2024-12-08 14:03:05.522069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.575 Running I/O for 5 seconds... 00:08:08.882 00:08:08.882 Latency(us) 00:08:08.882 [2024-12-08T14:03:11.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x0 length 0xbd0bd 00:08:08.882 Nvme0n1 : 5.05 2371.04 9.26 0.00 0.00 53803.02 10233.70 59284.87 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:08.882 Nvme0n1 : 5.05 2259.40 8.83 0.00 0.00 56521.14 5595.77 74610.22 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x0 length 0xa0000 00:08:08.882 Nvme1n1 : 5.05 2370.36 9.26 0.00 0.00 53787.89 9023.80 57268.38 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0xa0000 length 0xa0000 00:08:08.882 Nvme1n1 : 5.06 2258.65 8.82 0.00 0.00 56419.13 6503.19 74610.22 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x0 length 0x80000 00:08:08.882 Nvme2n1 : 5.06 2375.65 9.28 0.00 0.00 53596.34 4461.49 50412.31 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x80000 length 0x80000 00:08:08.882 Nvme2n1 : 5.06 2257.93 8.82 0.00 0.00 56335.39 6805.66 72190.42 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x0 length 0x80000 00:08:08.882 Nvme2n2 : 5.06 2374.38 9.27 0.00 0.00 53562.21 6276.33 51622.20 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x80000 length 0x80000 00:08:08.882 Nvme2n2 : 5.06 2256.09 8.81 0.00 0.00 56300.77 10334.52 68560.74 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x0 length 0x80000 00:08:08.882 Nvme2n3 : 5.06 2372.50 9.27 0.00 0.00 53517.49 8368.44 52025.50 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x80000 length 0x80000 00:08:08.882 Nvme2n3 : 5.07 2254.24 8.81 0.00 0.00 56270.09 13208.02 67754.14 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x0 length 0x20000 00:08:08.882 Nvme3n1 : 5.07 
2370.54 9.26 0.00 0.00 53488.90 11342.77 52025.50 00:08:08.882 [2024-12-08T14:03:11.802Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:08.882 Verification LBA range: start 0x20000 length 0x20000 00:08:08.882 Nvme3n1 : 5.07 2252.49 8.80 0.00 0.00 56239.52 14417.92 67754.14 00:08:08.882 [2024-12-08T14:03:11.802Z] =================================================================================================================== 00:08:08.882 [2024-12-08T14:03:11.802Z] Total : 27773.27 108.49 0.00 0.00 54953.04 4461.49 74610.22 00:08:23.849 00:08:23.849 real 0m21.521s 00:08:23.849 user 0m41.440s 00:08:23.849 sys 0m0.491s 00:08:23.849 14:03:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.849 14:03:26 -- common/autotest_common.sh@10 -- # set +x 00:08:23.849 ************************************ 00:08:23.849 END TEST bdev_verify 00:08:23.849 ************************************ 00:08:23.849 14:03:26 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:23.849 14:03:26 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:23.849 14:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.849 14:03:26 -- common/autotest_common.sh@10 -- # set +x 00:08:23.849 ************************************ 00:08:23.849 START TEST bdev_verify_big_io 00:08:23.849 ************************************ 00:08:23.849 14:03:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:23.849 [2024-12-08 14:03:26.664863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:23.849 [2024-12-08 14:03:26.664972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:08:24.107 [2024-12-08 14:03:26.813197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.107 [2024-12-08 14:03:27.008788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.107 [2024-12-08 14:03:27.008856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.039 Running I/O for 5 seconds... 
00:08:30.304 00:08:30.304 Latency(us) 00:08:30.304 [2024-12-08T14:03:33.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x0 length 0xbd0b 00:08:30.304 Nvme0n1 : 5.32 302.79 18.92 0.00 0.00 413218.10 54848.59 732390.01 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:30.304 Nvme0n1 : 5.30 329.68 20.60 0.00 0.00 380315.53 71383.83 554938.68 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x0 length 0xa000 00:08:30.304 Nvme1n1 : 5.32 302.71 18.92 0.00 0.00 406733.01 55251.89 667862.25 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0xa000 length 0xa000 00:08:30.304 Nvme1n1 : 5.35 333.89 20.87 0.00 0.00 372515.36 47790.87 506542.87 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x0 length 0x8000 00:08:30.304 Nvme2n1 : 5.36 309.67 19.35 0.00 0.00 393259.09 36700.16 596881.72 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x8000 length 0x8000 00:08:30.304 Nvme2n1 : 5.35 333.77 20.86 0.00 0.00 368417.94 48597.46 461373.44 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x0 length 0x8000 00:08:30.304 Nvme2n2 : 5.37 317.68 19.86 0.00 0.00 379006.43 13712.15 529127.58 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x8000 length 0x8000 00:08:30.304 Nvme2n2 : 5.36 340.85 21.30 0.00 0.00 359047.61 3554.07 416204.01 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x0 length 0x8000 00:08:30.304 Nvme2n3 : 5.40 331.17 20.70 0.00 0.00 358610.18 10183.29 461373.44 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x8000 length 0x8000 00:08:30.304 Nvme2n3 : 5.36 340.73 21.30 0.00 0.00 354931.97 4133.81 371034.58 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x0 length 0x2000 00:08:30.304 Nvme3n1 : 5.43 374.71 23.42 0.00 0.00 312812.99 288.30 403298.46 00:08:30.304 [2024-12-08T14:03:33.224Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.304 Verification LBA range: start 0x2000 length 0x2000 00:08:30.304 Nvme3n1 : 5.37 348.74 21.80 0.00 0.00 343776.60 3302.01 330704.74 00:08:30.304 [2024-12-08T14:03:33.224Z] =================================================================================================================== 00:08:30.304 [2024-12-08T14:03:33.224Z] Total : 3966.40 247.90 0.00 0.00 368545.28 288.30 732390.01 00:08:32.836 00:08:32.836 real 0m8.639s 00:08:32.836 user 
0m16.182s 00:08:32.836 sys 0m0.268s 00:08:32.836 14:03:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.836 14:03:35 -- common/autotest_common.sh@10 -- # set +x 00:08:32.836 ************************************ 00:08:32.836 END TEST bdev_verify_big_io 00:08:32.836 ************************************ 00:08:32.836 14:03:35 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.836 14:03:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:32.836 14:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.836 14:03:35 -- common/autotest_common.sh@10 -- # set +x 00:08:32.836 ************************************ 00:08:32.836 START TEST bdev_write_zeroes 00:08:32.836 ************************************ 00:08:32.836 14:03:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.836 [2024-12-08 14:03:35.345734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:32.836 [2024-12-08 14:03:35.345843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:08:32.836 [2024-12-08 14:03:35.494610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.836 [2024-12-08 14:03:35.660214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.401 Running I/O for 1 seconds... 00:08:34.333 00:08:34.333 Latency(us) 00:08:34.333 [2024-12-08T14:03:37.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.333 [2024-12-08T14:03:37.253Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:34.333 Nvme0n1 : 1.01 11968.94 46.75 0.00 0.00 10663.36 7108.14 20366.57 00:08:34.333 [2024-12-08T14:03:37.253Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:34.333 Nvme1n1 : 1.01 11955.29 46.70 0.00 0.00 10663.26 7309.78 20769.87 00:08:34.333 [2024-12-08T14:03:37.253Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:34.333 Nvme2n1 : 1.02 11972.64 46.77 0.00 0.00 10636.48 6956.90 20265.75 00:08:34.333 [2024-12-08T14:03:37.253Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:34.333 Nvme2n2 : 1.02 12001.62 46.88 0.00 0.00 10544.40 4789.17 20669.05 00:08:34.333 [2024-12-08T14:03:37.253Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:34.333 Nvme2n3 : 1.02 11988.03 46.83 0.00 0.00 10539.71 5192.47 20769.87 00:08:34.333 [2024-12-08T14:03:37.253Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:34.333 Nvme3n1 : 1.02 11974.51 46.78 0.00 0.00 10533.45 5545.35 19862.45 00:08:34.333 [2024-12-08T14:03:37.253Z] =================================================================================================================== 00:08:34.333 [2024-12-08T14:03:37.253Z] Total : 71861.02 280.71 0.00 0.00 10596.51 4789.17 20769.87 00:08:35.268 00:08:35.268 real 0m2.766s 00:08:35.268 user 0m2.461s 00:08:35.268 sys 0m0.191s 00:08:35.268 14:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:08:35.268 14:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.268 ************************************ 00:08:35.268 END TEST bdev_write_zeroes 00:08:35.268 ************************************ 00:08:35.268 14:03:38 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.268 14:03:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:35.268 14:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.268 14:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:35.268 ************************************ 00:08:35.268 START TEST bdev_json_nonenclosed 00:08:35.268 ************************************ 00:08:35.268 14:03:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.268 [2024-12-08 14:03:38.155329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:35.268 [2024-12-08 14:03:38.155445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61190 ] 00:08:35.526 [2024-12-08 14:03:38.298942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.785 [2024-12-08 14:03:38.496609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.785 [2024-12-08 14:03:38.496766] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:35.785 [2024-12-08 14:03:38.496785] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.044 00:08:36.044 real 0m0.714s 00:08:36.044 user 0m0.514s 00:08:36.044 sys 0m0.094s 00:08:36.044 14:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.044 14:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:36.044 ************************************ 00:08:36.044 END TEST bdev_json_nonenclosed 00:08:36.044 ************************************ 00:08:36.044 14:03:38 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.044 14:03:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:36.044 14:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.044 14:03:38 -- common/autotest_common.sh@10 -- # set +x 00:08:36.044 ************************************ 00:08:36.044 START TEST bdev_json_nonarray 00:08:36.044 ************************************ 00:08:36.044 14:03:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.044 [2024-12-08 14:03:38.907254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
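bdev_json_nonenclosed feeds bdevperf a config whose top level is not wrapped in {}, and bdev_json_nonarray (starting here) feeds one whose "subsystems" key is not an array; both must fail with exactly the errors logged. For contrast, a minimal well-formed shape is sketched below. This is illustrative only; the real bdev.json for these runs is generated by gen_nvme.sh, and the filename here is arbitrary:

    cat > bdev.sketch.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
          ]
        }
      ]
    }
    EOF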
00:08:36.044 [2024-12-08 14:03:38.907363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61221 ] 00:08:36.302 [2024-12-08 14:03:39.055023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.561 [2024-12-08 14:03:39.245286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.561 [2024-12-08 14:03:39.245448] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:36.561 [2024-12-08 14:03:39.245472] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.820 00:08:36.820 real 0m0.681s 00:08:36.820 user 0m0.473s 00:08:36.820 sys 0m0.104s 00:08:36.820 14:03:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.820 14:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:36.820 ************************************ 00:08:36.820 END TEST bdev_json_nonarray 00:08:36.820 ************************************ 00:08:36.820 14:03:39 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:36.820 14:03:39 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:36.820 14:03:39 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:36.820 14:03:39 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:36.820 14:03:39 -- bdev/blockdev.sh@809 -- # cleanup 00:08:36.820 14:03:39 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:36.820 14:03:39 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.820 14:03:39 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:36.820 14:03:39 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:36.820 14:03:39 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:36.820 14:03:39 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:36.820 00:08:36.820 real 0m53.763s 00:08:36.820 user 1m28.136s 00:08:36.820 sys 0m5.943s 00:08:36.820 14:03:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.820 14:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:36.820 ************************************ 00:08:36.820 END TEST blockdev_nvme 00:08:36.820 ************************************ 00:08:36.820 14:03:39 -- spdk/autotest.sh@206 -- # uname -s 00:08:36.820 14:03:39 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:36.820 14:03:39 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:36.820 14:03:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.820 14:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.820 14:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:36.820 ************************************ 00:08:36.820 START TEST blockdev_nvme_gpt 00:08:36.820 ************************************ 00:08:36.820 14:03:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:36.820 * Looking for test storage... 
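The gpt test file opens with a coverage guard: it pulls the installed lcov version with awk '{print $NF}' and compares it field by field against 2 to decide which option spellings to export, as traced below. A condensed equivalent of that lt/cmp_versions logic (the helper below is a simplification of the scripts/common.sh version, which also handles the < / > / = operator argument):

    lt() {
        local IFS=.- a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    ver=$(lcov --version | awk '{print $NF}')
    if lt "$ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'  # pre-2.0 spellings, as traced below
    fi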
00:08:36.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:36.820 14:03:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:36.820 14:03:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:36.820 14:03:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:36.820 14:03:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:36.820 14:03:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:36.820 14:03:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:36.820 14:03:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:36.820 14:03:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:36.820 14:03:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:36.820 14:03:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.820 14:03:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:36.820 14:03:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:36.820 14:03:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:36.820 14:03:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:36.820 14:03:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:36.820 14:03:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:36.820 14:03:39 -- scripts/common.sh@344 -- # : 1 00:08:36.820 14:03:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:36.820 14:03:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.820 14:03:39 -- scripts/common.sh@364 -- # decimal 1 00:08:36.820 14:03:39 -- scripts/common.sh@352 -- # local d=1 00:08:36.820 14:03:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.821 14:03:39 -- scripts/common.sh@354 -- # echo 1 00:08:37.079 14:03:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:37.079 14:03:39 -- scripts/common.sh@365 -- # decimal 2 00:08:37.079 14:03:39 -- scripts/common.sh@352 -- # local d=2 00:08:37.079 14:03:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.079 14:03:39 -- scripts/common.sh@354 -- # echo 2 00:08:37.079 14:03:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:37.079 14:03:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:37.079 14:03:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:37.079 14:03:39 -- scripts/common.sh@367 -- # return 0 00:08:37.079 14:03:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.079 14:03:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.079 --rc genhtml_branch_coverage=1 00:08:37.079 --rc genhtml_function_coverage=1 00:08:37.079 --rc genhtml_legend=1 00:08:37.079 --rc geninfo_all_blocks=1 00:08:37.079 --rc geninfo_unexecuted_blocks=1 00:08:37.079 00:08:37.079 ' 00:08:37.079 14:03:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.079 --rc genhtml_branch_coverage=1 00:08:37.079 --rc genhtml_function_coverage=1 00:08:37.079 --rc genhtml_legend=1 00:08:37.079 --rc geninfo_all_blocks=1 00:08:37.079 --rc geninfo_unexecuted_blocks=1 00:08:37.079 00:08:37.079 ' 00:08:37.079 14:03:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.079 --rc genhtml_branch_coverage=1 00:08:37.079 --rc genhtml_function_coverage=1 00:08:37.079 --rc genhtml_legend=1 00:08:37.079 --rc geninfo_all_blocks=1 00:08:37.079 --rc geninfo_unexecuted_blocks=1 00:08:37.079 00:08:37.079 ' 00:08:37.079 14:03:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:37.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.079 --rc genhtml_branch_coverage=1 00:08:37.079 --rc genhtml_function_coverage=1 00:08:37.079 --rc genhtml_legend=1 00:08:37.079 --rc geninfo_all_blocks=1 00:08:37.079 --rc geninfo_unexecuted_blocks=1 00:08:37.079 00:08:37.079 ' 00:08:37.079 14:03:39 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:37.079 14:03:39 -- bdev/nbd_common.sh@6 -- # set -e 00:08:37.079 14:03:39 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:37.079 14:03:39 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:37.079 14:03:39 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:37.079 14:03:39 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:37.079 14:03:39 -- bdev/blockdev.sh@18 -- # : 00:08:37.079 14:03:39 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:37.079 14:03:39 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:37.079 14:03:39 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:37.079 14:03:39 -- bdev/blockdev.sh@672 -- # uname -s 00:08:37.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.079 14:03:39 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:37.079 14:03:39 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:37.079 14:03:39 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:37.079 14:03:39 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:37.079 14:03:39 -- bdev/blockdev.sh@682 -- # dek= 00:08:37.079 14:03:39 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:37.079 14:03:39 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:37.079 14:03:39 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:37.079 14:03:39 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:37.079 14:03:39 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:37.079 14:03:39 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:37.079 14:03:39 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61304 00:08:37.079 14:03:39 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.079 14:03:39 -- bdev/blockdev.sh@47 -- # waitforlisten 61304 00:08:37.079 14:03:39 -- common/autotest_common.sh@829 -- # '[' -z 61304 ']' 00:08:37.079 14:03:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.079 14:03:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.079 14:03:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.079 14:03:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.079 14:03:39 -- common/autotest_common.sh@10 -- # set +x 00:08:37.079 14:03:39 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:37.079 [2024-12-08 14:03:39.819254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
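start_spdk_tgt backgrounds the target binary and blocks in waitforlisten until the RPC socket answers; the "Waiting for process to start up..." line above is that helper's progress message. A minimal equivalent of the pattern, with the caveat that the polling body is an assumption, since waitforlisten's internals are not shown in this log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target responds, bailing out if it died
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.1   # assumed pacing
    done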
00:08:37.079 [2024-12-08 14:03:39.819364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61304 ] 00:08:37.079 [2024-12-08 14:03:39.967155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.337 [2024-12-08 14:03:40.170673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:37.337 [2024-12-08 14:03:40.170892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.707 14:03:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.707 14:03:41 -- common/autotest_common.sh@862 -- # return 0 00:08:38.707 14:03:41 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:38.707 14:03:41 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:38.707 14:03:41 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:38.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:38.963 Waiting for block devices as requested 00:08:38.963 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.963 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.220 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:39.220 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.477 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:44.477 14:03:47 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:44.477 14:03:47 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:44.477 14:03:47 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:44.477 14:03:47 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:44.477 14:03:47 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:44.477 14:03:47 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:44.477 14:03:47 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:44.477 14:03:47 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:44.477 14:03:47 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:44.477 14:03:47 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:44.477 14:03:47 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:44.477 14:03:47 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:44.477 14:03:47 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:44.477 14:03:47 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:44.477 14:03:47 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:44.477 14:03:47 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:44.477 BYT; 00:08:44.477 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:44.477 14:03:47 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:44.477 BYT; 00:08:44.477 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:44.477 14:03:47 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:44.477 14:03:47 -- bdev/blockdev.sh@114 -- # break 00:08:44.477 14:03:47 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:44.477 14:03:47 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:44.477 14:03:47 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:44.477 14:03:47 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:44.477 14:03:47 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:44.477 14:03:47 -- scripts/common.sh@410 -- # local spdk_guid 00:08:44.477 14:03:47 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.477 14:03:47 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.477 14:03:47 -- scripts/common.sh@415 -- # IFS='()' 00:08:44.477 14:03:47 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:44.477 14:03:47 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.477 14:03:47 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:44.477 14:03:47 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.477 14:03:47 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.477 14:03:47 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:44.477 14:03:47 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:44.477 14:03:47 -- scripts/common.sh@422 -- # local spdk_guid 00:08:44.477 14:03:47 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:44.477 14:03:47 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.477 14:03:47 -- scripts/common.sh@427 -- # IFS='()' 00:08:44.477 14:03:47 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:44.477 14:03:47 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:44.477 14:03:47 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:44.477 14:03:47 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.477 14:03:47 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.477 14:03:47 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:44.477 14:03:47 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:45.409 The operation has completed successfully. 00:08:45.409 14:03:48 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:46.341 The operation has completed successfully. 
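The sequence above filters out zoned namespaces (anything whose /sys/block/<dev>/queue/zoned reports a value other than "none" would be excluded; every namespace here passes), locates the first namespace that parted reports with an unrecognised disk label, and lays down a GPT with two halves, SPDK_TEST_first and SPDK_TEST_second. The partition-type GUIDs applied by the two sgdisk calls are not hard-coded in the test; they are scraped out of module/bdev/gpt/gpt.h at run time. A minimal sketch of that scrape, reconstructed from the xtrace above on the assumption that the header carries the GUID inside a parenthesized macro:

    # Assumed gpt.h content, per the trace:
    #   ...SPDK_GPT_PART_TYPE_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
    get_spdk_gpt() {
        local spdk_guid
        local gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
        [[ -e $gpt_h ]] || return 1
        # Splitting on '(' and ')' leaves only the macro arguments in spdk_guid.
        IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        spdk_guid=${spdk_guid//, /-}   # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
        spdk_guid=${spdk_guid//0x/}    # -> "6527994e-2c5a-4eec-9613-8f5944074e8b"
        echo "$spdk_guid"
    }
    # Stamp partition 1 with the scraped type GUID (-t) and a fixed unique GUID (-u):
    sgdisk -t 1:"$(get_spdk_gpt)" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1

The same scrape against SPDK_GPT_PART_TYPE_GUID_OLD yields 7c5222bd-8f5d-4087-9c00-bf9843c7b58c for partition 2, as the second sgdisk call shows.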
00:08:46.341 14:03:49 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:47.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.334 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.334 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.334 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.334 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.334 14:03:50 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:47.334 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 [] 00:08:47.334 14:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 14:03:50 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:47.334 14:03:50 -- bdev/blockdev.sh@79 -- # local json 00:08:47.334 14:03:50 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:47.334 14:03:50 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:47.334 14:03:50 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:47.334 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.591 14:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.591 14:03:50 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:47.591 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.591 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.591 14:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.591 14:03:50 -- bdev/blockdev.sh@738 -- # cat 00:08:47.591 14:03:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:47.591 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.591 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.591 14:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.592 14:03:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:47.592 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.592 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.592 14:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.592 14:03:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:47.592 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.592 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.592 14:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.851 14:03:50 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:47.851 14:03:50 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:47.851 14:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.851 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.851 14:03:50 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:47.851 14:03:50 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.851 14:03:50 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:47.851 14:03:50 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:47.851 14:03:50 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "206d9182-e7b1-4739-9a6b-a5a1eacb1a77"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "206d9182-e7b1-4739-9a6b-a5a1eacb1a77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "cc86a267-5bd9-4a18-9ac3-d3381a07f20a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc86a267-5bd9-4a18-9ac3-d3381a07f20a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f9f44f6f-e68a-47c2-a8b1-6488909d0eec"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f9f44f6f-e68a-47c2-a8b1-6488909d0eec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0253f1bc-6679-4ecf-9098-ef2833efadff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0253f1bc-6679-4ecf-9098-ef2833efadff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "80ad9c06-4e5b-4817-a553-7464d9a9204f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80ad9c06-4e5b-4817-a553-7464d9a9204f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:47.851 14:03:50 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:47.851 14:03:50 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:47.851 14:03:50 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:47.851 14:03:50 -- bdev/blockdev.sh@752 -- # killprocess 61304 00:08:47.851 14:03:50 -- common/autotest_common.sh@936 -- # '[' -z 61304 ']' 00:08:47.851 14:03:50 -- common/autotest_common.sh@940 -- # kill -0 61304 00:08:47.851 14:03:50 -- common/autotest_common.sh@941 -- # uname 00:08:47.851 14:03:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:47.851 14:03:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61304 00:08:47.851 killing process with pid 61304 00:08:47.851 14:03:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:47.851 14:03:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:47.851 14:03:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61304' 00:08:47.851 14:03:50 -- common/autotest_common.sh@955 -- # kill 61304 00:08:47.852 14:03:50 -- common/autotest_common.sh@960 -- # wait 61304 00:08:49.224 14:03:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:49.224 14:03:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:49.224 14:03:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:49.224 14:03:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.224 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:08:49.224 ************************************ 00:08:49.224 START TEST bdev_hello_world 00:08:49.224 ************************************ 00:08:49.224 14:03:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:49.224 [2024-12-08 14:03:51.942874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:49.224 [2024-12-08 14:03:51.942994] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61948 ] 00:08:49.224 [2024-12-08 14:03:52.091833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.482 [2024-12-08 14:03:52.257689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.047 [2024-12-08 14:03:52.760581] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:50.047 [2024-12-08 14:03:52.760629] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:50.047 [2024-12-08 14:03:52.760647] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:50.047 [2024-12-08 14:03:52.763123] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:50.047 [2024-12-08 14:03:52.763590] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:50.047 [2024-12-08 14:03:52.763619] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:50.047 [2024-12-08 14:03:52.763772] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:50.047 00:08:50.047 [2024-12-08 14:03:52.763789] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:50.979 00:08:50.979 real 0m1.669s 00:08:50.979 user 0m1.375s 00:08:50.979 sys 0m0.186s 00:08:50.979 14:03:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.979 14:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:50.979 ************************************ 00:08:50.979 END TEST bdev_hello_world 00:08:50.979 ************************************ 00:08:50.979 14:03:53 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:50.979 14:03:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:50.979 14:03:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.979 14:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:50.979 ************************************ 00:08:50.979 START TEST bdev_bounds 00:08:50.979 ************************************ 00:08:50.979 14:03:53 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:50.979 14:03:53 -- bdev/blockdev.sh@288 -- # bdevio_pid=61989 00:08:50.979 14:03:53 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:50.979 Process bdevio pid: 61989 00:08:50.979 14:03:53 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61989' 00:08:50.979 14:03:53 -- bdev/blockdev.sh@291 -- # waitforlisten 61989 00:08:50.979 14:03:53 -- common/autotest_common.sh@829 -- # '[' -z 61989 ']' 00:08:50.979 14:03:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.979 14:03:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.979 14:03:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
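Stripped of the run_test and xtrace plumbing, the hello-world pass above reduces to one invocation of the prebuilt example against the generated bdev config; a sketch assuming the vagrant layout used throughout this run, with bdev.json presumed to carry the same four bdev_nvme_attach_controller entries loaded earlier:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Attach the controllers described in bdev.json, open the GPT partition
    # bdev Nvme0n1p1, write one buffer holding "Hello World!", read it back.
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b Nvme0n1p1

The NOTICE lines from hello_bdev.c above trace exactly that life cycle: open the bdev, open an io channel, write, see the write complete, read, and get the string back intact before the app stops.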
00:08:50.979 14:03:53 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:50.979 14:03:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.979 14:03:53 -- common/autotest_common.sh@10 -- # set +x 00:08:50.979 [2024-12-08 14:03:53.664035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:50.980 [2024-12-08 14:03:53.664146] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61989 ] 00:08:50.980 [2024-12-08 14:03:53.811795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.237 [2024-12-08 14:03:53.976130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.237 [2024-12-08 14:03:53.976534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.237 [2024-12-08 14:03:53.976568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.610 14:03:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.610 14:03:55 -- common/autotest_common.sh@862 -- # return 0 00:08:52.610 14:03:55 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:52.610 I/O targets: 00:08:52.610 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:52.610 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:52.610 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:52.610 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:52.610 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:52.610 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:52.610 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:52.610 00:08:52.610 00:08:52.610 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.610 http://cunit.sourceforge.net/ 00:08:52.610 00:08:52.610 00:08:52.610 Suite: bdevio tests on: Nvme3n1 00:08:52.610 Test: blockdev write read block ...passed 00:08:52.610 Test: blockdev write zeroes read block ...passed 00:08:52.610 Test: blockdev write zeroes read no split ...passed 00:08:52.610 Test: blockdev write zeroes read split ...passed 00:08:52.610 Test: blockdev write zeroes read split partial ...passed 00:08:52.610 Test: blockdev reset ...[2024-12-08 14:03:55.271339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:52.610 [2024-12-08 14:03:55.274009] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:52.610 passed 00:08:52.610 Test: blockdev write read 8 blocks ...passed 00:08:52.610 Test: blockdev write read size > 128k ...passed 00:08:52.610 Test: blockdev write read invalid size ...passed 00:08:52.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.610 Test: blockdev write read max offset ...passed 00:08:52.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.610 Test: blockdev writev readv 8 blocks ...passed 00:08:52.610 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.610 Test: blockdev writev readv block ...passed 00:08:52.610 Test: blockdev writev readv size > 128k ...passed 00:08:52.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.610 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.282056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27600a000 len:0x1000 00:08:52.610 [2024-12-08 14:03:55.282181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:52.610 passed 00:08:52.610 Test: blockdev nvme passthru rw ...passed 00:08:52.610 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:03:55.283067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:52.610 [2024-12-08 14:03:55.283155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:52.610 passed 00:08:52.610 Test: blockdev nvme admin passthru ...passed 00:08:52.610 Test: blockdev copy ...passed 00:08:52.610 Suite: bdevio tests on: Nvme2n3 00:08:52.610 Test: blockdev write read block ...passed 00:08:52.610 Test: blockdev write zeroes read block ...passed 00:08:52.610 Test: blockdev write zeroes read no split ...passed 00:08:52.610 Test: blockdev write zeroes read split ...passed 00:08:52.610 Test: blockdev write zeroes read split partial ...passed 00:08:52.610 Test: blockdev reset ...[2024-12-08 14:03:55.342184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:52.610 [2024-12-08 14:03:55.345137] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:52.610 passed 00:08:52.610 Test: blockdev write read 8 blocks ...passed 00:08:52.610 Test: blockdev write read size > 128k ...passed 00:08:52.610 Test: blockdev write read invalid size ...passed 00:08:52.610 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.610 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.610 Test: blockdev write read max offset ...passed 00:08:52.610 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.610 Test: blockdev writev readv 8 blocks ...passed 00:08:52.610 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.610 Test: blockdev writev readv block ...passed 00:08:52.610 Test: blockdev writev readv size > 128k ...passed 00:08:52.610 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.610 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.352435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c104000 len:0x1000 00:08:52.610 [2024-12-08 14:03:55.352549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:52.610 passed 00:08:52.610 Test: blockdev nvme passthru rw ...passed 00:08:52.610 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:03:55.353711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:52.610 [2024-12-08 14:03:55.353787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:52.610 passed 00:08:52.610 Test: blockdev nvme admin passthru ...passed 00:08:52.610 Test: blockdev copy ...passed 00:08:52.610 Suite: bdevio tests on: Nvme2n2 00:08:52.610 Test: blockdev write read block ...passed 00:08:52.610 Test: blockdev write zeroes read block ...passed 00:08:52.611 Test: blockdev write zeroes read no split ...passed 00:08:52.611 Test: blockdev write zeroes read split ...passed 00:08:52.611 Test: blockdev write zeroes read split partial ...passed 00:08:52.611 Test: blockdev reset ...[2024-12-08 14:03:55.426924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:52.611 [2024-12-08 14:03:55.429740] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:52.610 passed 00:08:52.610 Test: blockdev write read 8 blocks ...passed 00:08:52.610 Test: blockdev write read size > 128k ...passed 00:08:52.610 Test: blockdev write read invalid size ...passed 00:08:52.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.611 Test: blockdev write read max offset ...passed 00:08:52.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.611 Test: blockdev writev readv 8 blocks ...passed 00:08:52.611 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.611 Test: blockdev writev readv block ...passed 00:08:52.611 Test: blockdev writev readv size > 128k ...passed 00:08:52.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.611 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.437248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c104000 len:0x1000 00:08:52.611 [2024-12-08 14:03:55.437290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:52.611 passed 00:08:52.611 Test: blockdev nvme passthru rw ...passed 00:08:52.611 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:03:55.437946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:52.611 [2024-12-08 14:03:55.437970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:52.611 passed 00:08:52.611 Test: blockdev nvme admin passthru ...passed 00:08:52.611 Test: blockdev copy ...passed 00:08:52.611 Suite: bdevio tests on: Nvme2n1 00:08:52.611 Test: blockdev write read block ...passed 00:08:52.611 Test: blockdev write zeroes read block ...passed 00:08:52.611 Test: blockdev write zeroes read no split ...passed 00:08:52.611 Test: blockdev write zeroes read split ...passed 00:08:52.611 Test: blockdev write zeroes read split partial ...passed 00:08:52.611 Test: blockdev reset ...[2024-12-08 14:03:55.496257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:52.611 [2024-12-08 14:03:55.498868] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:52.611 passed 00:08:52.611 Test: blockdev write read 8 blocks ...passed 00:08:52.611 Test: blockdev write read size > 128k ...passed 00:08:52.611 Test: blockdev write read invalid size ...passed 00:08:52.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.611 Test: blockdev write read max offset ...passed 00:08:52.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.611 Test: blockdev writev readv 8 blocks ...passed 00:08:52.611 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.611 Test: blockdev writev readv block ...passed 00:08:52.611 Test: blockdev writev readv size > 128k ...passed 00:08:52.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.611 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.506301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b43c000 len:0x1000 00:08:52.611 [2024-12-08 14:03:55.506343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:52.611 passed 00:08:52.611 Test: blockdev nvme passthru rw ...passed 00:08:52.611 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:03:55.506992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:52.611 [2024-12-08 14:03:55.507017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:52.611 passed 00:08:52.611 Test: blockdev nvme admin passthru ...passed 00:08:52.611 Test: blockdev copy ...passed 00:08:52.611 Suite: bdevio tests on: Nvme1n1 00:08:52.611 Test: blockdev write read block ...passed 00:08:52.611 Test: blockdev write zeroes read block ...passed 00:08:52.611 Test: blockdev write zeroes read no split ...passed 00:08:52.868 Test: blockdev write zeroes read split ...passed 00:08:52.868 Test: blockdev write zeroes read split partial ...passed 00:08:52.868 Test: blockdev reset ...[2024-12-08 14:03:55.565486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:52.868 [2024-12-08 14:03:55.567932] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:52.868 passed 00:08:52.868 Test: blockdev write read 8 blocks ...passed 00:08:52.868 Test: blockdev write read size > 128k ...passed 00:08:52.868 Test: blockdev write read invalid size ...passed 00:08:52.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.868 Test: blockdev write read max offset ...passed 00:08:52.868 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.868 Test: blockdev writev readv 8 blocks ...passed 00:08:52.868 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.868 Test: blockdev writev readv block ...passed 00:08:52.868 Test: blockdev writev readv size > 128k ...passed 00:08:52.868 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.868 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.575536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b438000 len:0x1000 00:08:52.868 [2024-12-08 14:03:55.575577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:52.868 passed 00:08:52.868 Test: blockdev nvme passthru rw ...passed 00:08:52.868 Test: blockdev nvme passthru vendor specific ...[2024-12-08 14:03:55.576386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:52.868 [2024-12-08 14:03:55.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:52.868 passed 00:08:52.868 Test: blockdev nvme admin passthru ...passed 00:08:52.868 Test: blockdev copy ...passed 00:08:52.868 Suite: bdevio tests on: Nvme0n1p2 00:08:52.868 Test: blockdev write read block ...passed 00:08:52.868 Test: blockdev write zeroes read block ...passed 00:08:52.868 Test: blockdev write zeroes read no split ...passed 00:08:52.868 Test: blockdev write zeroes read split ...passed 00:08:52.868 Test: blockdev write zeroes read split partial ...passed 00:08:52.868 Test: blockdev reset ...[2024-12-08 14:03:55.634974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:52.868 [2024-12-08 14:03:55.637407] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:52.868 passed 00:08:52.868 Test: blockdev write read 8 blocks ...passed 00:08:52.868 Test: blockdev write read size > 128k ...passed 00:08:52.868 Test: blockdev write read invalid size ...passed 00:08:52.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.869 Test: blockdev write read max offset ...passed 00:08:52.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.869 Test: blockdev writev readv 8 blocks ...passed 00:08:52.869 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.869 Test: blockdev writev readv block ...passed 00:08:52.869 Test: blockdev writev readv size > 128k ...passed 00:08:52.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.869 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.644361] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:52.869 separate metadata which is not supported yet. 
00:08:52.869 passed 00:08:52.869 Test: blockdev nvme passthru rw ...passed 00:08:52.869 Test: blockdev nvme passthru vendor specific ...passed 00:08:52.869 Test: blockdev nvme admin passthru ...passed 00:08:52.869 Test: blockdev copy ...passed 00:08:52.869 Suite: bdevio tests on: Nvme0n1p1 00:08:52.869 Test: blockdev write read block ...passed 00:08:52.869 Test: blockdev write zeroes read block ...passed 00:08:52.869 Test: blockdev write zeroes read no split ...passed 00:08:52.869 Test: blockdev write zeroes read split ...passed 00:08:52.869 Test: blockdev write zeroes read split partial ...passed 00:08:52.869 Test: blockdev reset ...[2024-12-08 14:03:55.689173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:52.869 [2024-12-08 14:03:55.691566] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:52.869 passed 00:08:52.869 Test: blockdev write read 8 blocks ...passed 00:08:52.869 Test: blockdev write read size > 128k ...passed 00:08:52.869 Test: blockdev write read invalid size ...passed 00:08:52.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:52.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:52.869 Test: blockdev write read max offset ...passed 00:08:52.869 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:52.869 Test: blockdev writev readv 8 blocks ...passed 00:08:52.869 Test: blockdev writev readv 30 x 1block ...passed 00:08:52.869 Test: blockdev writev readv block ...passed 00:08:52.869 Test: blockdev writev readv size > 128k ...passed 00:08:52.869 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:52.869 Test: blockdev comparev and writev ...[2024-12-08 14:03:55.698461] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:52.869 separate metadata which is not supported yet. 
00:08:52.869 passed 00:08:52.869 Test: blockdev nvme passthru rw ...passed 00:08:52.869 Test: blockdev nvme passthru vendor specific ...passed 00:08:52.869 Test: blockdev nvme admin passthru ...passed 00:08:52.869 Test: blockdev copy ...passed 00:08:52.869 00:08:52.869 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.869 suites 7 7 n/a 0 0 00:08:52.869 tests 161 161 161 0 0 00:08:52.869 asserts 1006 1006 1006 0 n/a 00:08:52.869 00:08:52.869 Elapsed time = 1.278 seconds 00:08:52.869 0 00:08:52.869 14:03:55 -- bdev/blockdev.sh@293 -- # killprocess 61989 00:08:52.869 14:03:55 -- common/autotest_common.sh@936 -- # '[' -z 61989 ']' 00:08:52.869 14:03:55 -- common/autotest_common.sh@940 -- # kill -0 61989 00:08:52.869 14:03:55 -- common/autotest_common.sh@941 -- # uname 00:08:52.869 14:03:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:52.869 14:03:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61989 00:08:52.869 14:03:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:52.869 killing process with pid 61989 00:08:52.869 14:03:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:52.869 14:03:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61989' 00:08:52.869 14:03:55 -- common/autotest_common.sh@955 -- # kill 61989 00:08:52.869 14:03:55 -- common/autotest_common.sh@960 -- # wait 61989 00:08:53.441 14:03:56 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:53.441 00:08:53.441 real 0m2.718s 00:08:53.441 user 0m7.090s 00:08:53.441 sys 0m0.305s 00:08:53.441 14:03:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.441 14:03:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.441 ************************************ 00:08:53.441 END TEST bdev_bounds 00:08:53.441 ************************************ 00:08:53.698 14:03:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:53.698 14:03:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:53.698 14:03:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.698 14:03:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.698 ************************************ 00:08:53.698 START TEST bdev_nbd 00:08:53.698 ************************************ 00:08:53.698 14:03:56 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:53.698 14:03:56 -- bdev/blockdev.sh@298 -- # uname -s 00:08:53.698 14:03:56 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:53.698 14:03:56 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.698 14:03:56 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:53.698 14:03:56 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.698 14:03:56 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:53.698 14:03:56 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:53.698 14:03:56 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:53.698 14:03:56 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
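The bdev_nbd stage now starting exports each of the seven bdevs through the kernel nbd driver and smoke-tests it with a single 4 KiB O_DIRECT read. Condensed, the per-device cycle that the xtrace below expands looks roughly like this (a sketch assuming bdev_svc is listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded; the polling loop and the /tmp/nbdtest path are illustrative stand-ins for waitfornbd and the repo's nbdtest file):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    for bdev in Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # With no explicit /dev/nbdN argument the RPC allocates the next free
        # device and prints its path ("without_nbd_idx" in the trace below).
        nbd=$(rpc nbd_start_disk "$bdev")
        # waitfornbd equivalent: poll until the kernel registers the device.
        until grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
        # One direct-I/O block read proves data flows end to end over nbd.
        dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    done

Teardown happens in bulk afterwards: nbd_get_disks reports the full mapping and nbd_stop_disk releases each device, as the tail of this section shows.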
00:08:53.698 14:03:56 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:53.698 14:03:56 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:53.698 14:03:56 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:53.698 14:03:56 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:53.698 14:03:56 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.699 14:03:56 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:53.699 14:03:56 -- bdev/blockdev.sh@316 -- # nbd_pid=62051 00:08:53.699 14:03:56 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:53.699 14:03:56 -- bdev/blockdev.sh@318 -- # waitforlisten 62051 /var/tmp/spdk-nbd.sock 00:08:53.699 14:03:56 -- common/autotest_common.sh@829 -- # '[' -z 62051 ']' 00:08:53.699 14:03:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:53.699 14:03:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:53.699 14:03:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:53.699 14:03:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.699 14:03:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.699 14:03:56 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:53.699 [2024-12-08 14:03:56.441022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
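Before that teardown, the harness cross-checks the device-to-bdev mapping through the nbd_get_disks RPC, whose JSON reply appears further below; extracting the device column from it is the jq filter visible in the trace. A sketch under the same socket assumption (the disks_json and devices names are illustrative):

    # nbd_get_disks returns entries like
    #   { "nbd_device": "/dev/nbd0", "bdev_name": "Nvme0n1p1" }
    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    mapfile -t devices < <(jq -r '.[] | .nbd_device' <<< "$disks_json")
    printf '%s\n' "${devices[@]}"   # /dev/nbd0 through /dev/nbd6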
00:08:53.699 [2024-12-08 14:03:56.441139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.699 [2024-12-08 14:03:56.590971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.955 [2024-12-08 14:03:56.756970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.322 14:03:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.322 14:03:57 -- common/autotest_common.sh@862 -- # return 0 00:08:55.322 14:03:57 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@24 -- # local i 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:55.322 14:03:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:55.322 14:03:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:55.322 14:03:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:55.322 14:03:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:55.322 14:03:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:55.322 14:03:58 -- common/autotest_common.sh@867 -- # local i 00:08:55.322 14:03:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:55.322 14:03:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:55.322 14:03:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:55.322 14:03:58 -- common/autotest_common.sh@871 -- # break 00:08:55.322 14:03:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:55.322 14:03:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:55.322 14:03:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.322 1+0 records in 00:08:55.322 1+0 records out 00:08:55.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041433 s, 9.9 MB/s 00:08:55.322 14:03:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.322 14:03:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:55.322 14:03:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.322 14:03:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:55.322 14:03:58 -- common/autotest_common.sh@887 -- # return 0 00:08:55.322 14:03:58 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:08:55.322 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:55.322 14:03:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:55.578 14:03:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:55.578 14:03:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:55.578 14:03:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:55.578 14:03:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:55.578 14:03:58 -- common/autotest_common.sh@867 -- # local i 00:08:55.578 14:03:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:55.578 14:03:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:55.578 14:03:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:55.578 14:03:58 -- common/autotest_common.sh@871 -- # break 00:08:55.578 14:03:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:55.578 14:03:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:55.578 14:03:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.578 1+0 records in 00:08:55.578 1+0 records out 00:08:55.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452733 s, 9.0 MB/s 00:08:55.579 14:03:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.579 14:03:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:55.579 14:03:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.579 14:03:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:55.579 14:03:58 -- common/autotest_common.sh@887 -- # return 0 00:08:55.579 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:55.579 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:55.579 14:03:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:55.835 14:03:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:55.835 14:03:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:55.835 14:03:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:55.835 14:03:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:55.835 14:03:58 -- common/autotest_common.sh@867 -- # local i 00:08:55.835 14:03:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:55.835 14:03:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:55.835 14:03:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:55.835 14:03:58 -- common/autotest_common.sh@871 -- # break 00:08:55.835 14:03:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:55.835 14:03:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:55.835 14:03:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.835 1+0 records in 00:08:55.835 1+0 records out 00:08:55.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363534 s, 11.3 MB/s 00:08:55.835 14:03:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.835 14:03:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:55.835 14:03:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.835 14:03:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:55.835 14:03:58 -- 
common/autotest_common.sh@887 -- # return 0 00:08:55.835 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:55.835 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:55.835 14:03:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:56.092 14:03:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:56.092 14:03:58 -- common/autotest_common.sh@867 -- # local i 00:08:56.092 14:03:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:56.092 14:03:58 -- common/autotest_common.sh@871 -- # break 00:08:56.092 14:03:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.092 1+0 records in 00:08:56.092 1+0 records out 00:08:56.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042045 s, 9.7 MB/s 00:08:56.092 14:03:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.092 14:03:58 -- common/autotest_common.sh@884 -- # size=4096 00:08:56.092 14:03:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.092 14:03:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.092 14:03:58 -- common/autotest_common.sh@887 -- # return 0 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:56.092 14:03:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:56.092 14:03:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:56.092 14:03:58 -- common/autotest_common.sh@867 -- # local i 00:08:56.092 14:03:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:56.092 14:03:58 -- common/autotest_common.sh@871 -- # break 00:08:56.092 14:03:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.092 14:03:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.092 1+0 records in 00:08:56.092 1+0 records out 00:08:56.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405637 s, 10.1 MB/s 00:08:56.092 14:03:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.092 14:03:59 -- common/autotest_common.sh@884 -- # size=4096 00:08:56.093 14:03:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.093 14:03:59 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.093 14:03:59 -- common/autotest_common.sh@887 -- # return 0 00:08:56.093 14:03:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.093 14:03:59 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.093 14:03:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:56.349 14:03:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:56.349 14:03:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:56.349 14:03:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:56.349 14:03:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:56.349 14:03:59 -- common/autotest_common.sh@867 -- # local i 00:08:56.349 14:03:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.349 14:03:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.349 14:03:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:56.349 14:03:59 -- common/autotest_common.sh@871 -- # break 00:08:56.349 14:03:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.349 14:03:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.349 14:03:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.349 1+0 records in 00:08:56.349 1+0 records out 00:08:56.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545753 s, 7.5 MB/s 00:08:56.349 14:03:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.349 14:03:59 -- common/autotest_common.sh@884 -- # size=4096 00:08:56.349 14:03:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.349 14:03:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.349 14:03:59 -- common/autotest_common.sh@887 -- # return 0 00:08:56.349 14:03:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.349 14:03:59 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.349 14:03:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:56.606 14:03:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:56.606 14:03:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:56.606 14:03:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:56.606 14:03:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:56.606 14:03:59 -- common/autotest_common.sh@867 -- # local i 00:08:56.606 14:03:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.606 14:03:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.606 14:03:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:56.606 14:03:59 -- common/autotest_common.sh@871 -- # break 00:08:56.606 14:03:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.606 14:03:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.606 14:03:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.606 1+0 records in 00:08:56.606 1+0 records out 00:08:56.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564691 s, 7.3 MB/s 00:08:56.606 14:03:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.606 14:03:59 -- common/autotest_common.sh@884 -- # size=4096 00:08:56.606 14:03:59 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.606 14:03:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.606 14:03:59 -- common/autotest_common.sh@887 -- # return 0 00:08:56.606 14:03:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.606 14:03:59 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:56.606 14:03:59 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd0", 00:08:56.864 "bdev_name": "Nvme0n1p1" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd1", 00:08:56.864 "bdev_name": "Nvme0n1p2" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd2", 00:08:56.864 "bdev_name": "Nvme1n1" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd3", 00:08:56.864 "bdev_name": "Nvme2n1" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd4", 00:08:56.864 "bdev_name": "Nvme2n2" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd5", 00:08:56.864 "bdev_name": "Nvme2n3" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd6", 00:08:56.864 "bdev_name": "Nvme3n1" 00:08:56.864 } 00:08:56.864 ]' 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd0", 00:08:56.864 "bdev_name": "Nvme0n1p1" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd1", 00:08:56.864 "bdev_name": "Nvme0n1p2" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd2", 00:08:56.864 "bdev_name": "Nvme1n1" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd3", 00:08:56.864 "bdev_name": "Nvme2n1" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd4", 00:08:56.864 "bdev_name": "Nvme2n2" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd5", 00:08:56.864 "bdev_name": "Nvme2n3" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "nbd_device": "/dev/nbd6", 00:08:56.864 "bdev_name": "Nvme3n1" 00:08:56.864 } 00:08:56.864 ]' 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@51 -- # local i 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.864 14:03:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:08:57.120 14:03:59 -- bdev/nbd_common.sh@41 -- # break 00:08:57.121 14:03:59 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.121 14:03:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.121 14:03:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@41 -- # break 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@41 -- # break 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.378 14:04:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@41 -- # break 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.635 14:04:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@41 -- # break 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.892 14:04:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:58.149 14:04:00 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@41 -- # break 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.149 14:04:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@41 -- # break 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@65 -- # true 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@65 -- # count=0 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@122 -- # count=0 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@127 -- # return 0 00:08:58.407 14:04:01 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:58.407 14:04:01 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@12 -- # local i 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:58.407 14:04:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:58.663 /dev/nbd0 00:08:58.663 14:04:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:58.663 14:04:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:58.663 14:04:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:58.663 14:04:01 -- common/autotest_common.sh@867 -- # local i 00:08:58.663 14:04:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.663 14:04:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.663 14:04:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:58.663 14:04:01 -- common/autotest_common.sh@871 -- # break 00:08:58.663 14:04:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.663 14:04:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.663 14:04:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.663 1+0 records in 00:08:58.663 1+0 records out 00:08:58.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242444 s, 16.9 MB/s 00:08:58.663 14:04:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.663 14:04:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.663 14:04:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.663 14:04:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.663 14:04:01 -- common/autotest_common.sh@887 -- # return 0 00:08:58.663 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.663 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:58.663 14:04:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:58.920 /dev/nbd1 00:08:58.920 14:04:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.920 14:04:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.920 14:04:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:58.920 14:04:01 -- common/autotest_common.sh@867 -- # local i 00:08:58.920 14:04:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:58.920 14:04:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:58.920 14:04:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:58.920 14:04:01 -- common/autotest_common.sh@871 -- # break 00:08:58.920 14:04:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:58.920 14:04:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:58.920 14:04:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.920 1+0 records in 00:08:58.920 1+0 records out 00:08:58.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029511 s, 13.9 MB/s 00:08:58.920 14:04:01 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.920 14:04:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:58.920 14:04:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.920 14:04:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:58.920 14:04:01 -- common/autotest_common.sh@887 -- # return 0 00:08:58.920 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.920 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:58.920 14:04:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:59.177 /dev/nbd10 00:08:59.177 14:04:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:59.177 14:04:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:59.177 14:04:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:59.177 14:04:01 -- common/autotest_common.sh@867 -- # local i 00:08:59.177 14:04:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.177 14:04:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.177 14:04:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:59.177 14:04:01 -- common/autotest_common.sh@871 -- # break 00:08:59.177 14:04:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.177 14:04:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.177 14:04:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.177 1+0 records in 00:08:59.177 1+0 records out 00:08:59.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443485 s, 9.2 MB/s 00:08:59.178 14:04:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.178 14:04:01 -- common/autotest_common.sh@884 -- # size=4096 00:08:59.178 14:04:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.178 14:04:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.178 14:04:01 -- common/autotest_common.sh@887 -- # return 0 00:08:59.178 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.178 14:04:01 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:59.178 14:04:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:59.434 /dev/nbd11 00:08:59.434 14:04:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:59.434 14:04:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:59.434 14:04:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:59.434 14:04:02 -- common/autotest_common.sh@867 -- # local i 00:08:59.434 14:04:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.434 14:04:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.434 14:04:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:59.434 14:04:02 -- common/autotest_common.sh@871 -- # break 00:08:59.434 14:04:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.434 14:04:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.434 14:04:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.434 1+0 records in 00:08:59.434 1+0 records out 00:08:59.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419252 s, 9.8 MB/s 00:08:59.434 14:04:02 -- common/autotest_common.sh@884 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.434 14:04:02 -- common/autotest_common.sh@884 -- # size=4096 00:08:59.434 14:04:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.434 14:04:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.434 14:04:02 -- common/autotest_common.sh@887 -- # return 0 00:08:59.434 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.434 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:59.434 14:04:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:59.692 /dev/nbd12 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:59.692 14:04:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:59.692 14:04:02 -- common/autotest_common.sh@867 -- # local i 00:08:59.692 14:04:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:59.692 14:04:02 -- common/autotest_common.sh@871 -- # break 00:08:59.692 14:04:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.692 1+0 records in 00:08:59.692 1+0 records out 00:08:59.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539951 s, 7.6 MB/s 00:08:59.692 14:04:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.692 14:04:02 -- common/autotest_common.sh@884 -- # size=4096 00:08:59.692 14:04:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.692 14:04:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.692 14:04:02 -- common/autotest_common.sh@887 -- # return 0 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:59.692 /dev/nbd13 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:59.692 14:04:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:59.692 14:04:02 -- common/autotest_common.sh@867 -- # local i 00:08:59.692 14:04:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:59.692 14:04:02 -- common/autotest_common.sh@871 -- # break 00:08:59.692 14:04:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.692 14:04:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.692 1+0 records in 00:08:59.692 1+0 records out 00:08:59.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046013 s, 8.9 MB/s 00:08:59.692 14:04:02 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.692 14:04:02 -- common/autotest_common.sh@884 -- # size=4096 00:08:59.692 14:04:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.692 14:04:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.692 14:04:02 -- common/autotest_common.sh@887 -- # return 0 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:59.692 14:04:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:59.949 /dev/nbd14 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:59.949 14:04:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:59.949 14:04:02 -- common/autotest_common.sh@867 -- # local i 00:08:59.949 14:04:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:59.949 14:04:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:59.949 14:04:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:59.949 14:04:02 -- common/autotest_common.sh@871 -- # break 00:08:59.949 14:04:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:59.949 14:04:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:59.949 14:04:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.949 1+0 records in 00:08:59.949 1+0 records out 00:08:59.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454577 s, 9.0 MB/s 00:08:59.949 14:04:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.949 14:04:02 -- common/autotest_common.sh@884 -- # size=4096 00:08:59.949 14:04:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.949 14:04:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:59.949 14:04:02 -- common/autotest_common.sh@887 -- # return 0 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.949 14:04:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.207 14:04:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd0", 00:09:00.207 "bdev_name": "Nvme0n1p1" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd1", 00:09:00.207 "bdev_name": "Nvme0n1p2" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd10", 00:09:00.207 "bdev_name": "Nvme1n1" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd11", 00:09:00.207 "bdev_name": "Nvme2n1" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd12", 00:09:00.207 "bdev_name": "Nvme2n2" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd13", 00:09:00.207 "bdev_name": "Nvme2n3" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd14", 00:09:00.207 "bdev_name": "Nvme3n1" 00:09:00.207 } 00:09:00.207 ]' 00:09:00.207 14:04:02 -- bdev/nbd_common.sh@64 -- # echo 
'[ 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd0", 00:09:00.207 "bdev_name": "Nvme0n1p1" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd1", 00:09:00.207 "bdev_name": "Nvme0n1p2" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd10", 00:09:00.207 "bdev_name": "Nvme1n1" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd11", 00:09:00.207 "bdev_name": "Nvme2n1" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd12", 00:09:00.207 "bdev_name": "Nvme2n2" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd13", 00:09:00.207 "bdev_name": "Nvme2n3" 00:09:00.207 }, 00:09:00.207 { 00:09:00.207 "nbd_device": "/dev/nbd14", 00:09:00.207 "bdev_name": "Nvme3n1" 00:09:00.207 } 00:09:00.207 ]' 00:09:00.207 14:04:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:00.207 /dev/nbd1 00:09:00.207 /dev/nbd10 00:09:00.207 /dev/nbd11 00:09:00.207 /dev/nbd12 00:09:00.207 /dev/nbd13 00:09:00.207 /dev/nbd14' 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:00.207 /dev/nbd1 00:09:00.207 /dev/nbd10 00:09:00.207 /dev/nbd11 00:09:00.207 /dev/nbd12 00:09:00.207 /dev/nbd13 00:09:00.207 /dev/nbd14' 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@65 -- # count=7 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@95 -- # count=7 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:00.207 256+0 records in 00:09:00.207 256+0 records out 00:09:00.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00979608 s, 107 MB/s 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.207 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:00.464 256+0 records in 00:09:00.464 256+0 records out 00:09:00.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0709694 s, 14.8 MB/s 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:00.464 256+0 records in 00:09:00.464 256+0 records out 00:09:00.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0696693 s, 15.1 MB/s 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:00.464 256+0 records in 00:09:00.464 256+0 records out 
00:09:00.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.068243 s, 15.4 MB/s 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:00.464 256+0 records in 00:09:00.464 256+0 records out 00:09:00.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0659535 s, 15.9 MB/s 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.464 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:00.722 256+0 records in 00:09:00.722 256+0 records out 00:09:00.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0675922 s, 15.5 MB/s 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:00.722 256+0 records in 00:09:00.722 256+0 records out 00:09:00.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0682995 s, 15.4 MB/s 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:00.722 256+0 records in 00:09:00.722 256+0 records out 00:09:00.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131923 s, 7.9 MB/s 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.722 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@51 -- # local i 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@41 -- # break 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.980 14:04:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@41 -- # break 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.238 14:04:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@41 -- # break 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.494 14:04:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd11 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@41 -- # break 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@41 -- # break 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.751 14:04:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@41 -- # break 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.009 14:04:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@41 -- # break 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.266 14:04:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@65 -- # true 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@65 -- # count=0 00:09:02.524 14:04:05 
-- bdev/nbd_common.sh@66 -- # echo 0 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@104 -- # count=0 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@109 -- # return 0 00:09:02.524 14:04:05 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:02.524 14:04:05 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:02.781 malloc_lvol_verify 00:09:02.781 14:04:05 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:02.781 876b6652-b3b2-48d5-86be-70b07ceac60a 00:09:02.781 14:04:05 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:03.038 dc9f7c79-21e4-4c70-b337-682ec03c37c8 00:09:03.038 14:04:05 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:03.296 /dev/nbd0 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:03.296 mke2fs 1.47.0 (5-Feb-2023) 00:09:03.296 Discarding device blocks: 0/4096 done 00:09:03.296 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:03.296 00:09:03.296 Allocating group tables: 0/1 done 00:09:03.296 Writing inode tables: 0/1 done 00:09:03.296 Creating journal (1024 blocks): done 00:09:03.296 Writing superblocks and filesystem accounting information: 0/1 done 00:09:03.296 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@51 -- # local i 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.296 14:04:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@41 -- # break 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:03.553 14:04:06 -- bdev/nbd_common.sh@147 -- # return 0 00:09:03.553 14:04:06 -- bdev/blockdev.sh@324 -- # killprocess 62051 00:09:03.553 14:04:06 -- 
common/autotest_common.sh@936 -- # '[' -z 62051 ']' 00:09:03.553 14:04:06 -- common/autotest_common.sh@940 -- # kill -0 62051 00:09:03.553 14:04:06 -- common/autotest_common.sh@941 -- # uname 00:09:03.553 14:04:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.553 14:04:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62051 00:09:03.553 14:04:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.553 14:04:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.553 killing process with pid 62051 00:09:03.553 14:04:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62051' 00:09:03.553 14:04:06 -- common/autotest_common.sh@955 -- # kill 62051 00:09:03.553 14:04:06 -- common/autotest_common.sh@960 -- # wait 62051 00:09:04.117 ************************************ 00:09:04.117 END TEST bdev_nbd 00:09:04.117 ************************************ 00:09:04.117 14:04:06 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:04.117 00:09:04.117 real 0m10.601s 00:09:04.117 user 0m14.985s 00:09:04.117 sys 0m3.297s 00:09:04.117 14:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.117 14:04:06 -- common/autotest_common.sh@10 -- # set +x 00:09:04.117 skipping fio tests on NVMe due to multi-ns failures. 00:09:04.117 14:04:07 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:04.117 14:04:07 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:04.117 14:04:07 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:04.117 14:04:07 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:04.117 14:04:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:04.117 14:04:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:04.117 14:04:07 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:04.117 14:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.118 14:04:07 -- common/autotest_common.sh@10 -- # set +x 00:09:04.118 ************************************ 00:09:04.118 START TEST bdev_verify 00:09:04.118 ************************************ 00:09:04.118 14:04:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:04.375 [2024-12-08 14:04:07.085503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.375 [2024-12-08 14:04:07.085619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62464 ] 00:09:04.375 [2024-12-08 14:04:07.231068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.632 [2024-12-08 14:04:07.396200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.632 [2024-12-08 14:04:07.396287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.208 Running I/O for 5 seconds... 
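With END TEST bdev_nbd above, it is worth recording the helper pattern that dominated that test: every nbd_start_disk call was followed by a waitfornbd poll and a one-block direct read. Below is a minimal bash sketch reconstructed from the xtrace, not the exact helper in common/autotest_common.sh (the retry delay, for instance, is assumed since it never appears in the trace):

  # Poll /proc/partitions until the nbd device shows up (up to 20 tries),
  # then prove it is readable with a single 4 KiB O_DIRECT read, as traced.
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed retry delay, not visible in the xtrace
      done
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]    # mirrors the trace's '[' 4096 '!=' 0 ']' check
  }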
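The five-second run now in flight is SPDK's bdevperf example reading the bdev.json config generated earlier in this job. A standalone equivalent of the invocation traced above, assuming the CI's /home/vagrant/spdk_repo layout (the trailing '' is passed through from the test wrapper unchanged):

  # verify workload: 4 KiB I/O with data verification, queue depth 128,
  # 5 seconds, core mask 0x3 (cores 0 and 1).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''

Judging by the paired Core Mask 0x1/0x2 rows in the results below, -C lets every core in the mask run a job against every bdev; that reading is inferred from the output, not from bdevperf's documentation.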
00:09:10.535 [2024-12-08T14:04:13.455Z] Latency(us): bdev_verify results (all jobs: workload verify, depth 128, IO size 4096)

Job        Core Mask  LBA start  LBA length  runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average    min       max
Nvme0n1p1  0x1        0x0        0x5e800     5.03        3253.78   12.71   0.00    0.00  39242.34   5671.38   50815.61
Nvme0n1p1  0x2        0x5e800    0x5e800     5.05        2954.86   11.54   0.00    0.00  43040.38   12905.55  40733.14
Nvme0n1p2  0x1        0x0        0x5e7ff     5.04        3259.20   12.73   0.00    0.00  39134.17   3276.80   44161.18
Nvme0n1p2  0x2        0x5e7ff    0x5e7ff     5.05        2960.67   11.57   0.00    0.00  42971.04   1115.37   40733.14
Nvme1n1    0x1        0x0        0xa0000     5.04        3258.58   12.73   0.00    0.00  39107.07   3705.30   42144.69
Nvme1n1    0x2        0xa0000    0xa0000     5.06        2959.95   11.56   0.00    0.00  42943.49   1663.61   41136.44
Nvme2n1    0x1        0x0        0x80000     5.04        3256.73   12.72   0.00    0.00  39086.41   5948.65   39724.90
Nvme2n1    0x2        0x80000    0x80000     5.04        2961.34   11.57   0.00    0.00  43117.32   5671.38   47185.92
Nvme2n2    0x1        0x0        0x80000     5.04        3255.12   12.72   0.00    0.00  39056.72   7763.50   37305.11
Nvme2n2    0x2        0x80000    0x80000     5.04        2959.52   11.56   0.00    0.00  43104.39   7864.32   44766.13
Nvme2n3    0x1        0x0        0x80000     5.05        3261.24   12.74   0.00    0.00  38974.51   781.39    36095.21
Nvme2n3    0x2        0x80000    0x80000     5.04        2957.89   11.55   0.00    0.00  43096.53   9880.81   44161.18
Nvme3n1    0x1        0x0        0x20000     5.05        3259.48   12.73   0.00    0.00  38956.56   2873.50   36095.21
Nvme3n1    0x2        0x20000    0x20000     5.05        2956.64   11.55   0.00    0.00  43064.77   10989.88  42346.34
===================================================================================================================
Total                                                    43515.00  169.98  0.00    0.00  40969.52   781.39    50815.61

00:09:15.789
00:09:15.789 real 0m11.606s
00:09:15.789 user 0m22.106s
00:09:15.789 sys 0m0.293s
00:09:15.789 14:04:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:15.789 14:04:18 -- common/autotest_common.sh@10 -- # set +x
00:09:15.789 ************************************
00:09:15.789 END TEST bdev_verify
00:09:15.789 ************************************
00:09:15.789 14:04:18 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:15.789 14:04:18 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:09:15.789 14:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.789 14:04:18 -- common/autotest_common.sh@10 -- # set +x
00:09:15.789 ************************************
00:09:15.789 START TEST bdev_verify_big_io
00:09:15.789 ************************************
00:09:15.789 14:04:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:16.047 [2024-12-08 14:04:18.727452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:16.047 [2024-12-08 14:04:18.727544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62581 ]
00:09:16.047 [2024-12-08 14:04:18.871643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:16.305 [2024-12-08 14:04:19.036849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:16.305 [2024-12-08 14:04:19.036917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.870 Running I/O for 5 seconds...
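Each stage in this log is launched through autotest's run_test wrapper, which produces the START/END banners and the real/user/sys timings seen above. Here is a simplified bash sketch of that visible behavior only; the real helper in common/autotest_common.sh also manages xtrace and checks its arguments (the '[' 16 -le 1 ']' traces above):

  run_test() {
      local test_name=$1
      shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"   # emits the real/user/sys lines once the test body returns
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
  }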
00:09:23.429 [2024-12-08T14:04:26.349Z] Latency(us): bdev_verify_big_io results (all jobs: workload verify, depth 128, IO size 65536)

Job        Core Mask  LBA start  LBA length  runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
Nvme0n1p1  0x1        0x0        0x5e80      5.39        274.25   17.14   0.00    0.00  457886.40  29642.44  777559.43
Nvme0n1p1  0x2        0x5e80     0x5e80      5.38        274.69   17.17   0.00    0.00  458421.90  25710.28  700126.13
Nvme0n1p2  0x1        0x0        0x5e7f      5.39        274.17   17.14   0.00    0.00  450495.09  29844.09  703352.52
Nvme0n1p2  0x2        0x5e7f     0x5e7f      5.38        274.61   17.16   0.00    0.00  453031.67  26012.75  638824.76
Nvme1n1    0x1        0x0        0xa000      5.40        281.98   17.62   0.00    0.00  433697.98  11846.89  629145.60
Nvme1n1    0x2        0xa000     0xa000      5.38        274.53   17.16   0.00    0.00  447621.52  26617.70  587202.56
Nvme2n1    0x1        0x0        0x8000      5.41        281.90   17.62   0.00    0.00  426359.63  12300.60  554938.68
Nvme2n1    0x2        0x8000     0x8000      5.39        282.55   17.66   0.00    0.00  432517.70  7965.14   538806.74
Nvme2n2    0x1        0x0        0x8000      5.42        289.08   18.07   0.00    0.00  409138.18  12653.49  790464.98
Nvme2n2    0x2        0x8000     0x8000      5.40        282.47   17.65   0.00    0.00  427202.45  8570.09   503316.48
Nvme2n3    0x1        0x0        0x8000      5.48        325.84   20.36   0.00    0.00  357576.97  8469.27   803370.54
Nvme2n3    0x2        0x8000     0x8000      5.40        290.24   18.14   0.00    0.00  411402.64  1877.86   519448.42
Nvme3n1    0x1        0x0        0x2000      5.53        391.64   24.48   0.00    0.00  294192.47  444.26    806596.92
Nvme3n1    0x2        0x2000     0x2000      5.40        297.96   18.62   0.00    0.00  395551.84  2029.10   532353.97
===================================================================================================================
Total                                                    4095.92  256.00  0.00    0.00  413433.63  444.26    806596.92

00:09:24.803
00:09:24.803 real 0m8.645s
00:09:24.803 user 0m16.228s
00:09:24.803 sys 0m0.275s
00:09:24.803 14:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:24.803 14:04:27 -- common/autotest_common.sh@10 -- # set +x
00:09:24.803 ************************************
00:09:24.803 END TEST bdev_verify_big_io
00:09:24.803 ************************************
00:09:24.803 14:04:27 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:24.803 14:04:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:24.803 14:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:24.803 14:04:27 -- common/autotest_common.sh@10 -- # set +x
00:09:24.803 ************************************
00:09:24.803 START TEST bdev_write_zeroes
00:09:24.803 ************************************
00:09:24.803 14:04:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:24.803 [2024-12-08 14:04:27.423484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:24.803 [2024-12-08 14:04:27.423592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62693 ]
00:09:24.803 [2024-12-08 14:04:27.568894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.061 [2024-12-08 14:04:27.731437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.626 Running I/O for 1 seconds...
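The write_zeroes stage reuses bdevperf with a different workload; the EAL banner above reports a single core, so every job lands on Core Mask 0x1 in the results below. A standalone equivalent of the traced invocation, under the same path assumptions as before (the trailing '' again comes from the test wrapper):

  # 4 KiB write_zeroes commands at queue depth 128 for 1 second per bdev.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1 ''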
00:09:26.558 00:09:26.558 Latency(us) 00:09:26.558 [2024-12-08T14:04:29.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme0n1p1 : 1.01 10152.65 39.66 0.00 0.00 12573.16 9931.22 26214.40 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme0n1p2 : 1.02 10140.21 39.61 0.00 0.00 12571.41 10132.87 27222.65 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme1n1 : 1.02 10128.73 39.57 0.00 0.00 12563.95 10082.46 26012.75 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme2n1 : 1.02 10117.36 39.52 0.00 0.00 12562.51 9931.22 26617.70 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme2n2 : 1.02 10105.85 39.48 0.00 0.00 12535.02 9931.22 24399.56 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme2n3 : 1.02 10150.21 39.65 0.00 0.00 12480.25 7763.50 22181.42 00:09:26.558 [2024-12-08T14:04:29.478Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:26.558 Nvme3n1 : 1.02 10138.87 39.60 0.00 0.00 12477.35 8166.79 21979.77 00:09:26.558 [2024-12-08T14:04:29.478Z] =================================================================================================================== 00:09:26.558 [2024-12-08T14:04:29.478Z] Total : 70933.88 277.09 0.00 0.00 12537.56 7763.50 27222.65 00:09:27.487 00:09:27.487 real 0m2.774s 00:09:27.487 user 0m2.466s 00:09:27.487 sys 0m0.196s 00:09:27.487 14:04:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:27.487 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:27.487 ************************************ 00:09:27.487 END TEST bdev_write_zeroes 00:09:27.487 ************************************ 00:09:27.487 14:04:30 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.487 14:04:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:27.487 14:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.487 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:27.487 ************************************ 00:09:27.487 START TEST bdev_json_nonenclosed 00:09:27.487 ************************************ 00:09:27.487 14:04:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.487 [2024-12-08 14:04:30.241145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
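Before moving on, a quick arithmetic check on the two summary rows above: bdevperf reports MiB/s as IOPS * I/O size / 2^20, and both tables are internally consistent (the verify run used 65536-byte I/Os, the write_zeroes run 4096-byte I/Os):

    # Cross-check of the Total rows: MiB/s = IOPS * io_size / 2^20
    awk 'BEGIN {
      printf "verify:       %.2f MiB/s\n", 4095.92  * 65536 / 1048576   # 256.00, as reported
      printf "write_zeroes: %.2f MiB/s\n", 70933.88 * 4096  / 1048576   # 277.09, as reported
    }'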
00:09:27.487 [2024-12-08 14:04:30.241261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62746 ] 00:09:27.487 [2024-12-08 14:04:30.388031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.745 [2024-12-08 14:04:30.551581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.745 [2024-12-08 14:04:30.551715] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:27.745 [2024-12-08 14:04:30.551737] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.003 00:09:28.003 real 0m0.598s 00:09:28.003 user 0m0.397s 00:09:28.003 sys 0m0.097s 00:09:28.003 14:04:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.003 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:28.003 ************************************ 00:09:28.003 END TEST bdev_json_nonenclosed 00:09:28.003 ************************************ 00:09:28.003 14:04:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:28.003 14:04:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:28.003 14:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.003 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:28.003 ************************************ 00:09:28.003 START TEST bdev_json_nonarray 00:09:28.003 ************************************ 00:09:28.003 14:04:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:28.003 [2024-12-08 14:04:30.874642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:28.003 [2024-12-08 14:04:30.874733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:09:28.259 [2024-12-08 14:04:31.017649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.515 [2024-12-08 14:04:31.179210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.515 [2024-12-08 14:04:31.179347] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
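The bdev_json_nonenclosed and bdev_json_nonarray tests feed deliberately malformed JSON configs to bdevperf and expect spdk_subsystem_init_from_json_config() to fail with exactly the errors shown. The fixture files themselves are not printed in the log; the pair below is a hypothetical reconstruction that would trip the same two checks (file names match the traced paths, contents are illustrative only):

    # Hypothetical fixture contents; the real files are not shown in this log.
    printf '%s\n' '"subsystems": []' > nonenclosed.json
    # -> *ERROR*: Invalid JSON configuration: not enclosed in {}.
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev" } }' > nonarray.json
    # -> *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.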
00:09:28.515 [2024-12-08 14:04:31.179363] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.515 00:09:28.515 real 0m0.590s 00:09:28.515 user 0m0.391s 00:09:28.515 sys 0m0.094s 00:09:28.515 14:04:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.515 14:04:31 -- common/autotest_common.sh@10 -- # set +x 00:09:28.515 ************************************ 00:09:28.515 END TEST bdev_json_nonarray 00:09:28.515 ************************************ 00:09:28.772 14:04:31 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:28.772 14:04:31 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:28.772 14:04:31 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:28.772 14:04:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.772 14:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.772 14:04:31 -- common/autotest_common.sh@10 -- # set +x 00:09:28.772 ************************************ 00:09:28.772 START TEST bdev_gpt_uuid 00:09:28.772 ************************************ 00:09:28.772 14:04:31 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:28.772 14:04:31 -- bdev/blockdev.sh@612 -- # local bdev 00:09:28.772 14:04:31 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:28.772 14:04:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62797 00:09:28.772 14:04:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:28.772 14:04:31 -- bdev/blockdev.sh@47 -- # waitforlisten 62797 00:09:28.772 14:04:31 -- common/autotest_common.sh@829 -- # '[' -z 62797 ']' 00:09:28.772 14:04:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.772 14:04:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.772 14:04:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.772 14:04:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.772 14:04:31 -- common/autotest_common.sh@10 -- # set +x 00:09:28.772 14:04:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:28.772 [2024-12-08 14:04:31.519799] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:28.772 [2024-12-08 14:04:31.519889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62797 ] 00:09:28.772 [2024-12-08 14:04:31.662962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.029 [2024-12-08 14:04:31.827163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:29.029 [2024-12-08 14:04:31.827331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.401 14:04:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.401 14:04:32 -- common/autotest_common.sh@862 -- # return 0 00:09:30.401 14:04:32 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:30.401 14:04:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.401 14:04:32 -- common/autotest_common.sh@10 -- # set +x 00:09:30.401 Some configs were skipped because the RPC state that can call them passed over. 
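The bdev_gpt_uuid test underway here drives a standalone spdk_tgt over its RPC socket instead of running bdevperf. Condensed from the rpc_cmd trace, the flow is the three rpc.py calls below (same RPC method names, paths, and GUID as the trace; rpc.py defaults to the /var/tmp/spdk.sock the target is listening on):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # attach the NVMe bdevs and examine GPT
    $RPC bdev_wait_for_examine                                             # block until the GPT partitions are claimed
    $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030            # look up Nvme0n1p1 by its partition GUID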
00:09:30.401 14:04:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.401 14:04:33 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:30.401 14:04:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.401 14:04:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.401 14:04:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.401 14:04:33 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:30.401 14:04:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.401 14:04:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.401 14:04:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.401 14:04:33 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:30.401 { 00:09:30.401 "name": "Nvme0n1p1", 00:09:30.401 "aliases": [ 00:09:30.401 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:30.401 ], 00:09:30.401 "product_name": "GPT Disk", 00:09:30.401 "block_size": 4096, 00:09:30.401 "num_blocks": 774144, 00:09:30.401 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:30.401 "md_size": 64, 00:09:30.401 "md_interleave": false, 00:09:30.401 "dif_type": 0, 00:09:30.401 "assigned_rate_limits": { 00:09:30.401 "rw_ios_per_sec": 0, 00:09:30.401 "rw_mbytes_per_sec": 0, 00:09:30.401 "r_mbytes_per_sec": 0, 00:09:30.401 "w_mbytes_per_sec": 0 00:09:30.401 }, 00:09:30.401 "claimed": false, 00:09:30.401 "zoned": false, 00:09:30.401 "supported_io_types": { 00:09:30.401 "read": true, 00:09:30.401 "write": true, 00:09:30.401 "unmap": true, 00:09:30.401 "write_zeroes": true, 00:09:30.401 "flush": true, 00:09:30.401 "reset": true, 00:09:30.401 "compare": true, 00:09:30.401 "compare_and_write": false, 00:09:30.401 "abort": true, 00:09:30.401 "nvme_admin": false, 00:09:30.401 "nvme_io": false 00:09:30.401 }, 00:09:30.401 "driver_specific": { 00:09:30.401 "gpt": { 00:09:30.401 "base_bdev": "Nvme0n1", 00:09:30.401 "offset_blocks": 256, 00:09:30.401 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:30.401 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:30.401 "partition_name": "SPDK_TEST_first" 00:09:30.401 } 00:09:30.401 } 00:09:30.401 } 00:09:30.401 ]' 00:09:30.401 14:04:33 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:30.659 14:04:33 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:30.659 14:04:33 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:30.659 14:04:33 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:30.659 14:04:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.659 14:04:33 -- common/autotest_common.sh@10 -- # set +x 00:09:30.659 14:04:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:30.659 { 00:09:30.659 "name": "Nvme0n1p2", 00:09:30.659 "aliases": [ 00:09:30.659 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:30.659 ], 00:09:30.659 "product_name": "GPT Disk", 00:09:30.659 "block_size": 4096, 00:09:30.659 "num_blocks": 774143, 00:09:30.659 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:30.659 "md_size": 64, 00:09:30.659 "md_interleave": false, 00:09:30.659 "dif_type": 0, 00:09:30.659 "assigned_rate_limits": { 00:09:30.659 "rw_ios_per_sec": 0, 00:09:30.659 "rw_mbytes_per_sec": 0, 00:09:30.659 "r_mbytes_per_sec": 0, 00:09:30.659 "w_mbytes_per_sec": 0 00:09:30.659 }, 00:09:30.659 "claimed": false, 00:09:30.659 "zoned": false, 00:09:30.659 "supported_io_types": { 00:09:30.659 "read": true, 00:09:30.659 "write": true, 00:09:30.659 "unmap": true, 00:09:30.659 "write_zeroes": true, 00:09:30.659 "flush": true, 00:09:30.659 "reset": true, 00:09:30.659 "compare": true, 00:09:30.659 "compare_and_write": false, 00:09:30.659 "abort": true, 00:09:30.659 "nvme_admin": false, 00:09:30.659 "nvme_io": false 00:09:30.659 }, 00:09:30.659 "driver_specific": { 00:09:30.659 "gpt": { 00:09:30.659 "base_bdev": "Nvme0n1", 00:09:30.659 "offset_blocks": 774400, 00:09:30.659 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:30.659 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:30.659 "partition_name": "SPDK_TEST_second" 00:09:30.659 } 00:09:30.659 } 00:09:30.659 } 00:09:30.659 ]' 00:09:30.659 14:04:33 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:30.659 14:04:33 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:30.659 14:04:33 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:30.659 14:04:33 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:30.659 14:04:33 -- bdev/blockdev.sh@629 -- # killprocess 62797 00:09:30.659 14:04:33 -- common/autotest_common.sh@936 -- # '[' -z 62797 ']' 00:09:30.659 14:04:33 -- common/autotest_common.sh@940 -- # kill -0 62797 00:09:30.659 14:04:33 -- common/autotest_common.sh@941 -- # uname 00:09:30.659 14:04:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.659 14:04:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62797 00:09:30.659 14:04:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:30.659 killing process with pid 62797 00:09:30.659 14:04:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:30.659 14:04:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62797' 00:09:30.660 14:04:33 -- common/autotest_common.sh@955 -- # kill 62797 00:09:30.660 14:04:33 -- common/autotest_common.sh@960 -- # wait 62797 00:09:32.047 00:09:32.047 real 0m3.295s 00:09:32.047 user 0m3.492s 00:09:32.047 sys 0m0.402s 00:09:32.047 14:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.047 ************************************ 00:09:32.047 14:04:34 -- common/autotest_common.sh@10 -- # set +x 00:09:32.047 END TEST bdev_gpt_uuid 00:09:32.047 ************************************ 00:09:32.047 14:04:34 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:32.047 14:04:34 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:32.047 14:04:34 -- bdev/blockdev.sh@809 -- # cleanup 00:09:32.047 14:04:34 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:32.047 14:04:34 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:32.047 14:04:34 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:32.047 14:04:34 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:32.047 14:04:34 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:32.047 14:04:34 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.317 Waiting for block devices as requested 00:09:32.573 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.573 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.573 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.573 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.830 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:37.830 14:04:40 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:37.830 14:04:40 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:38.088 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:38.088 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:38.088 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:38.088 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:38.088 14:04:40 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:38.088 00:09:38.088 real 1m1.158s 00:09:38.088 user 1m21.866s 00:09:38.088 sys 0m7.709s 00:09:38.088 14:04:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.088 ************************************ 00:09:38.088 END TEST blockdev_nvme_gpt 00:09:38.088 ************************************ 00:09:38.088 14:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:38.088 14:04:40 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:38.088 14:04:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.088 14:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.088 14:04:40 -- common/autotest_common.sh@10 -- # set +x 00:09:38.088 ************************************ 00:09:38.088 START TEST nvme 00:09:38.088 ************************************ 00:09:38.088 14:04:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:38.088 * Looking for test storage... 
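A note on the wipefs pass in the cleanup above: on this namespace the three erases correspond to the primary GPT header at LBA 1 (offset 0x1000 with 4096-byte sectors), the backup GPT header near the end of the device (offset 0x17a179000), and the 55 aa protective-MBR signature at offset 0x1fe, after which wipefs asks the kernel to re-read the now-empty partition table. The equivalent manual cleanup, destructive and shown for reference only:

    # Destructive: erases all partition-table signatures, as in the log above.
    wipefs --all /dev/nvme2n1         # GPT primary + backup headers and the PMBR signature
    blockdev --rereadpt /dev/nvme2n1  # redundant here; wipefs already triggered the re-read ioctl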
00:09:38.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.088 14:04:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:38.088 14:04:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:38.088 14:04:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:38.088 14:04:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:38.088 14:04:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:38.088 14:04:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:38.088 14:04:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:38.088 14:04:40 -- scripts/common.sh@335 -- # IFS=.-: 00:09:38.088 14:04:40 -- scripts/common.sh@335 -- # read -ra ver1 00:09:38.088 14:04:40 -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.088 14:04:40 -- scripts/common.sh@336 -- # read -ra ver2 00:09:38.088 14:04:40 -- scripts/common.sh@337 -- # local 'op=<' 00:09:38.088 14:04:40 -- scripts/common.sh@339 -- # ver1_l=2 00:09:38.088 14:04:40 -- scripts/common.sh@340 -- # ver2_l=1 00:09:38.088 14:04:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:38.088 14:04:40 -- scripts/common.sh@343 -- # case "$op" in 00:09:38.088 14:04:40 -- scripts/common.sh@344 -- # : 1 00:09:38.088 14:04:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:38.088 14:04:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.088 14:04:40 -- scripts/common.sh@364 -- # decimal 1 00:09:38.088 14:04:40 -- scripts/common.sh@352 -- # local d=1 00:09:38.088 14:04:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.088 14:04:40 -- scripts/common.sh@354 -- # echo 1 00:09:38.088 14:04:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:38.088 14:04:40 -- scripts/common.sh@365 -- # decimal 2 00:09:38.088 14:04:40 -- scripts/common.sh@352 -- # local d=2 00:09:38.088 14:04:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.088 14:04:40 -- scripts/common.sh@354 -- # echo 2 00:09:38.088 14:04:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:38.088 14:04:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:38.088 14:04:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:38.088 14:04:40 -- scripts/common.sh@367 -- # return 0 00:09:38.088 14:04:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.088 14:04:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.088 --rc genhtml_branch_coverage=1 00:09:38.088 --rc genhtml_function_coverage=1 00:09:38.088 --rc genhtml_legend=1 00:09:38.088 --rc geninfo_all_blocks=1 00:09:38.088 --rc geninfo_unexecuted_blocks=1 00:09:38.088 00:09:38.088 ' 00:09:38.088 14:04:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.088 --rc genhtml_branch_coverage=1 00:09:38.088 --rc genhtml_function_coverage=1 00:09:38.088 --rc genhtml_legend=1 00:09:38.088 --rc geninfo_all_blocks=1 00:09:38.088 --rc geninfo_unexecuted_blocks=1 00:09:38.088 00:09:38.088 ' 00:09:38.088 14:04:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.088 --rc genhtml_branch_coverage=1 00:09:38.088 --rc genhtml_function_coverage=1 00:09:38.088 --rc genhtml_legend=1 00:09:38.088 --rc geninfo_all_blocks=1 00:09:38.088 --rc geninfo_unexecuted_blocks=1 00:09:38.088 00:09:38.088 ' 00:09:38.088 14:04:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.088 --rc genhtml_branch_coverage=1 00:09:38.088 --rc genhtml_function_coverage=1 00:09:38.088 --rc genhtml_legend=1 00:09:38.088 --rc geninfo_all_blocks=1 00:09:38.088 --rc geninfo_unexecuted_blocks=1 00:09:38.088 00:09:38.088 ' 00:09:38.088 14:04:40 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.291 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.291 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.291 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.291 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.291 14:04:42 -- nvme/nvme.sh@79 -- # uname 00:09:39.291 14:04:42 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:39.291 14:04:42 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:39.291 14:04:42 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:39.291 14:04:42 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:39.291 14:04:42 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:39.291 14:04:42 -- common/autotest_common.sh@1055 -- # echo 0 00:09:39.291 14:04:42 -- common/autotest_common.sh@1057 -- # stubpid=63472 00:09:39.291 Waiting for stub to ready for secondary processes... 00:09:39.291 14:04:42 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:39.291 14:04:42 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:39.291 14:04:42 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63472 ]] 00:09:39.291 14:04:42 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:39.291 14:04:42 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:39.291 [2024-12-08 14:04:42.144550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:39.291 [2024-12-08 14:04:42.144684] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.234 14:04:43 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:40.234 14:04:43 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63472 ]] 00:09:40.234 14:04:43 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:40.495 [2024-12-08 14:04:43.207262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.756 [2024-12-08 14:04:43.479426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.756 [2024-12-08 14:04:43.479845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.756 [2024-12-08 14:04:43.479971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.756 [2024-12-08 14:04:43.503109] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.756 [2024-12-08 14:04:43.517826] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:40.756 [2024-12-08 14:04:43.518069] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:40.756 [2024-12-08 14:04:43.531559] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.756 [2024-12-08 14:04:43.531773] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:40.756 [2024-12-08 14:04:43.531932] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:40.756 [2024-12-08 14:04:43.541611] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.756 [2024-12-08 14:04:43.541818] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:40.756 [2024-12-08 14:04:43.541966] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:40.756 [2024-12-08 14:04:43.549596] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:40.756 [2024-12-08 14:04:43.549804] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:40.756 [2024-12-08 14:04:43.550052] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:40.756 [2024-12-08 14:04:43.550181] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:40.756 [2024-12-08 14:04:43.550351] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:41.326 done. 00:09:41.326 14:04:44 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:41.326 14:04:44 -- common/autotest_common.sh@1064 -- # echo done. 
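That completes the hugepage stub handoff: test/app/stub is started as a DPDK primary process (-s 4096 -i 0 -m 0xE) so the tests that follow can attach as secondary processes without re-initializing hugepages, and the shell polls until the stub signals readiness. Reduced to its essentials, the wait loop traced above is:

    # Reduced from the start_stub/_start_stub trace (PID 63472 in this run).
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do   # presumably created by the stub once EAL init finishes
        [ -e "/proc/$stubpid" ] || exit 1    # give up if the stub died during bring-up
        sleep 1s
    done
    echo done.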
00:09:41.326 14:04:44 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:41.326 14:04:44 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:41.326 14:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.326 14:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:41.326 ************************************ 00:09:41.326 START TEST nvme_reset 00:09:41.326 ************************************ 00:09:41.326 14:04:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:41.586 Initializing NVMe Controllers 00:09:41.586 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:41.586 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:41.586 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:41.586 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:41.586 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:41.586 00:09:41.586 real 0m0.233s 00:09:41.586 user 0m0.062s 00:09:41.586 sys 0m0.122s 00:09:41.586 14:04:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.586 14:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:41.586 ************************************ 00:09:41.586 END TEST nvme_reset 00:09:41.586 ************************************ 00:09:41.586 14:04:44 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:41.586 14:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:41.586 14:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.586 14:04:44 -- common/autotest_common.sh@10 -- # set +x 00:09:41.586 ************************************ 00:09:41.586 START TEST nvme_identify 00:09:41.586 ************************************ 00:09:41.586 14:04:44 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:41.586 14:04:44 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:41.586 14:04:44 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:41.586 14:04:44 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.586 14:04:44 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:41.586 14:04:44 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:41.586 14:04:44 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:41.586 14:04:44 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.586 14:04:44 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.587 14:04:44 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:41.587 14:04:44 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:41.587 14:04:44 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:41.587 14:04:44 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:41.850 ===================================================== 00:09:41.850 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:41.850 ===================================================== 00:09:41.850 Controller Capabilities/Features 00:09:41.850 ================================ 00:09:41.850 Vendor ID: 1b36 00:09:41.850 Subsystem Vendor ID: 1af4 00:09:41.850 Serial Number: 12340 00:09:41.850 Model Number: QEMU NVMe Ctrl 00:09:41.850 Firmware Version: 8.0.0 00:09:41.850 Recommended Arb Burst: 6 00:09:41.850 IEEE OUI Identifier: 00 54 52 00:09:41.850 Multi-path I/O 00:09:41.850 May have multiple subsystem ports: No 00:09:41.850 May have 
multiple controllers: No 00:09:41.850 Associated with SR-IOV VF: No 00:09:41.850 Max Data Transfer Size: 524288 00:09:41.850 Max Number of Namespaces: 256 00:09:41.850 Max Number of I/O Queues: 64 00:09:41.850 NVMe Specification Version (VS): 1.4 00:09:41.850 NVMe Specification Version (Identify): 1.4 00:09:41.850 Maximum Queue Entries: 2048 00:09:41.850 Contiguous Queues Required: Yes 00:09:41.850 Arbitration Mechanisms Supported 00:09:41.850 Weighted Round Robin: Not Supported 00:09:41.850 Vendor Specific: Not Supported 00:09:41.850 Reset Timeout: 7500 ms 00:09:41.850 Doorbell Stride: 4 bytes 00:09:41.850 NVM Subsystem Reset: Not Supported 00:09:41.850 Command Sets Supported 00:09:41.850 NVM Command Set: Supported 00:09:41.850 Boot Partition: Not Supported 00:09:41.850 Memory Page Size Minimum: 4096 bytes 00:09:41.850 Memory Page Size Maximum: 65536 bytes 00:09:41.850 Persistent Memory Region: Not Supported 00:09:41.850 Optional Asynchronous Events Supported 00:09:41.850 Namespace Attribute Notices: Supported 00:09:41.850 Firmware Activation Notices: Not Supported 00:09:41.850 ANA Change Notices: Not Supported 00:09:41.851 PLE Aggregate Log Change Notices: Not Supported 00:09:41.851 LBA Status Info Alert Notices: Not Supported 00:09:41.851 EGE Aggregate Log Change Notices: Not Supported 00:09:41.851 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.851 Zone Descriptor Change Notices: Not Supported 00:09:41.851 Discovery Log Change Notices: Not Supported 00:09:41.851 Controller Attributes 00:09:41.851 128-bit Host Identifier: Not Supported 00:09:41.851 Non-Operational Permissive Mode: Not Supported 00:09:41.851 NVM Sets: Not Supported 00:09:41.851 Read Recovery Levels: Not Supported 00:09:41.851 Endurance Groups: Not Supported 00:09:41.851 Predictable Latency Mode: Not Supported 00:09:41.851 Traffic Based Keep ALive: Not Supported 00:09:41.851 Namespace Granularity: Not Supported 00:09:41.851 SQ Associations: Not Supported 00:09:41.851 UUID List: Not Supported 00:09:41.851 Multi-Domain Subsystem: Not Supported 00:09:41.851 Fixed Capacity Management: Not Supported 00:09:41.851 Variable Capacity Management: Not Supported 00:09:41.851 Delete Endurance Group: Not Supported 00:09:41.851 Delete NVM Set: Not Supported 00:09:41.851 Extended LBA Formats Supported: Supported 00:09:41.851 Flexible Data Placement Supported: Not Supported 00:09:41.851 00:09:41.851 Controller Memory Buffer Support 00:09:41.851 ================================ 00:09:41.851 Supported: No 00:09:41.851 00:09:41.851 Persistent Memory Region Support 00:09:41.851 ================================ 00:09:41.851 Supported: No 00:09:41.851 00:09:41.851 Admin Command Set Attributes 00:09:41.851 ============================ 00:09:41.851 Security Send/Receive: Not Supported 00:09:41.851 Format NVM: Supported 00:09:41.851 Firmware Activate/Download: Not Supported 00:09:41.851 Namespace Management: Supported 00:09:41.851 Device Self-Test: Not Supported 00:09:41.851 Directives: Supported 00:09:41.851 NVMe-MI: Not Supported 00:09:41.851 Virtualization Management: Not Supported 00:09:41.851 Doorbell Buffer Config: Supported 00:09:41.851 Get LBA Status Capability: Not Supported 00:09:41.851 Command & Feature Lockdown Capability: Not Supported 00:09:41.851 Abort Command Limit: 4 00:09:41.851 Async Event Request Limit: 4 00:09:41.851 Number of Firmware Slots: N/A 00:09:41.851 Firmware Slot 1 Read-Only: N/A 00:09:41.851 Firmware Activation Without Reset: N/A 00:09:41.851 Multiple Update Detection Support: N/A 00:09:41.851 Firmware 
Update Granularity: No Information Provided 00:09:41.851 Per-Namespace SMART Log: Yes 00:09:41.851 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.851 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:41.851 Command Effects Log Page: Supported 00:09:41.851 Get Log Page Extended Data: Supported 00:09:41.851 Telemetry Log Pages: Not Supported 00:09:41.851 Persistent Event Log Pages: Not Supported 00:09:41.851 Supported Log Pages Log Page: May Support 00:09:41.851 Commands Supported & Effects Log Page: Not Supported 00:09:41.851 Feature Identifiers & Effects Log Page:May Support 00:09:41.851 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.851 Data Area 4 for Telemetry Log: Not Supported 00:09:41.851 Error Log Page Entries Supported: 1 00:09:41.851 Keep Alive: Not Supported 00:09:41.851 00:09:41.851 NVM Command Set Attributes 00:09:41.851 ========================== 00:09:41.851 Submission Queue Entry Size 00:09:41.851 Max: 64 00:09:41.851 Min: 64 00:09:41.851 Completion Queue Entry Size 00:09:41.851 Max: 16 00:09:41.851 Min: 16 00:09:41.851 Number of Namespaces: 256 00:09:41.851 Compare Command: Supported 00:09:41.851 Write Uncorrectable Command: Not Supported 00:09:41.851 Dataset Management Command: Supported 00:09:41.851 Write Zeroes Command: Supported 00:09:41.851 Set Features Save Field: Supported 00:09:41.851 Reservations: Not Supported 00:09:41.851 Timestamp: Supported 00:09:41.851 Copy: Supported 00:09:41.851 Volatile Write Cache: Present 00:09:41.851 Atomic Write Unit (Normal): 1 00:09:41.851 Atomic Write Unit (PFail): 1 00:09:41.851 Atomic Compare & Write Unit: 1 00:09:41.851 Fused Compare & Write: Not Supported 00:09:41.851 Scatter-Gather List 00:09:41.851 SGL Command Set: Supported 00:09:41.851 SGL Keyed: Not Supported 00:09:41.851 SGL Bit Bucket Descriptor: Not Supported 00:09:41.851 SGL Metadata Pointer: Not Supported 00:09:41.851 Oversized SGL: Not Supported 00:09:41.851 SGL Metadata Address: Not Supported 00:09:41.851 SGL Offset: Not Supported 00:09:41.851 Transport SGL Data Block: Not Supported 00:09:41.851 Replay Protected Memory Block: Not Supported 00:09:41.851 00:09:41.851 Firmware Slot Information 00:09:41.851 ========================= 00:09:41.851 Active slot: 1 00:09:41.851 Slot 1 Firmware Revision: 1.0 00:09:41.851 00:09:41.851 00:09:41.851 Commands Supported and Effects 00:09:41.851 ============================== 00:09:41.851 Admin Commands 00:09:41.851 -------------- 00:09:41.851 Delete I/O Submission Queue (00h): Supported 00:09:41.851 Create I/O Submission Queue (01h): Supported 00:09:41.851 Get Log Page (02h): Supported 00:09:41.851 Delete I/O Completion Queue (04h): Supported 00:09:41.851 Create I/O Completion Queue (05h): Supported 00:09:41.851 Identify (06h): Supported 00:09:41.851 Abort (08h): Supported 00:09:41.851 Set Features (09h): Supported 00:09:41.851 Get Features (0Ah): Supported 00:09:41.851 Asynchronous Event Request (0Ch): Supported 00:09:41.851 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.851 Directive Send (19h): Supported 00:09:41.851 Directive Receive (1Ah): Supported 00:09:41.851 Virtualization Management (1Ch): Supported 00:09:41.851 Doorbell Buffer Config (7Ch): Supported 00:09:41.851 Format NVM (80h): Supported LBA-Change 00:09:41.851 I/O Commands 00:09:41.851 ------------ 00:09:41.851 Flush (00h): Supported LBA-Change 00:09:41.851 Write (01h): Supported LBA-Change 00:09:41.851 Read (02h): Supported 00:09:41.851 Compare (05h): Supported 00:09:41.851 Write Zeroes (08h): Supported LBA-Change 
00:09:41.851 Dataset Management (09h): Supported LBA-Change 00:09:41.851 Unknown (0Ch): Supported 00:09:41.851 Unknown (12h): Supported 00:09:41.851 Copy (19h): Supported LBA-Change 00:09:41.851 Unknown (1Dh): Supported LBA-Change 00:09:41.851 00:09:41.851 Error Log 00:09:41.851 ========= 00:09:41.851 00:09:41.851 Arbitration 00:09:41.851 =========== 00:09:41.851 Arbitration Burst: no limit 00:09:41.851 00:09:41.851 Power Management 00:09:41.851 ================ 00:09:41.851 Number of Power States: 1 00:09:41.851 Current Power State: Power State #0 00:09:41.851 Power State #0: 00:09:41.851 Max Power: 25.00 W 00:09:41.851 Non-Operational State: Operational 00:09:41.851 Entry Latency: 16 microseconds 00:09:41.851 Exit Latency: 4 microseconds 00:09:41.851 Relative Read Throughput: 0 00:09:41.851 Relative Read Latency: 0 00:09:41.851 Relative Write Throughput: 0 00:09:41.851 Relative Write Latency: 0 00:09:41.851 Idle Power: Not Reported [2024-12-08 14:04:44.689700] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63514 terminated unexpected [2024-12-08 14:04:44.691597] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63514 terminated unexpected 00:09:41.851 Active Power: Not Reported 00:09:41.851 Non-Operational Permissive Mode: Not Supported 00:09:41.851 00:09:41.851 Health Information 00:09:41.851 ================== 00:09:41.851 Critical Warnings: 00:09:41.851 Available Spare Space: OK 00:09:41.851 Temperature: OK 00:09:41.851 Device Reliability: OK 00:09:41.851 Read Only: No 00:09:41.851 Volatile Memory Backup: OK 00:09:41.851 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.851 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.851 Available Spare: 0% 00:09:41.851 Available Spare Threshold: 0% 00:09:41.851 Life Percentage Used: 0% 00:09:41.851 Data Units Read: 2043 00:09:41.851 Data Units Written: 945 00:09:41.851 Host Read Commands: 97740 00:09:41.851 Host Write Commands: 48623 00:09:41.851 Controller Busy Time: 0 minutes 00:09:41.851 Power Cycles: 0 00:09:41.851 Power On Hours: 0 hours 00:09:41.851 Unsafe Shutdowns: 0 00:09:41.851 Unrecoverable Media Errors: 0 00:09:41.851 Lifetime Error Log Entries: 0 00:09:41.851 Warning Temperature Time: 0 minutes 00:09:41.851 Critical Temperature Time: 0 minutes 00:09:41.851 00:09:41.851 Number of Queues 00:09:41.851 ================ 00:09:41.851 Number of I/O Submission Queues: 64 00:09:41.851 Number of I/O Completion Queues: 64 00:09:41.851 00:09:41.851 ZNS Specific Controller Data 00:09:41.851 ============================ 00:09:41.851 Zone Append Size Limit: 0 00:09:41.851 00:09:41.851 00:09:41.851 Active Namespaces 00:09:41.851 ================= 00:09:41.852 Namespace ID:1 00:09:41.852 Error Recovery Timeout: Unlimited 00:09:41.852 Command Set Identifier: NVM (00h) 00:09:41.852 Deallocate: Supported 00:09:41.852 Deallocated/Unwritten Error: Supported 00:09:41.852 Deallocated Read Value: All 0x00 00:09:41.852 Deallocate in Write Zeroes: Not Supported 00:09:41.852 Deallocated Guard Field: 0xFFFF 00:09:41.852 Flush: Supported 00:09:41.852 Reservation: Not Supported 00:09:41.852 Metadata Transferred as: Separate Metadata Buffer 00:09:41.852 Namespace Sharing Capabilities: Private 00:09:41.852 Size (in LBAs): 1548666 (5GiB) 00:09:41.852 Capacity (in LBAs): 1548666 (5GiB) 00:09:41.852 Utilization (in LBAs): 1548666 (5GiB) 00:09:41.852 Thin Provisioning: Not Supported 00:09:41.852 Per-NS Atomic Units: No 00:09:41.852 Maximum Single Source Range
Length: 128 00:09:41.852 Maximum Copy Length: 128 00:09:41.852 Maximum Source Range Count: 128 00:09:41.852 NGUID/EUI64 Never Reused: No 00:09:41.852 Namespace Write Protected: No 00:09:41.852 Number of LBA Formats: 8 00:09:41.852 Current LBA Format: LBA Format #07 00:09:41.852 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.852 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.852 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.852 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.852 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.852 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.852 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.852 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.852 00:09:41.852 ===================================================== 00:09:41.852 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:41.852 ===================================================== 00:09:41.852 Controller Capabilities/Features 00:09:41.852 ================================ 00:09:41.852 Vendor ID: 1b36 00:09:41.852 Subsystem Vendor ID: 1af4 00:09:41.852 Serial Number: 12341 00:09:41.852 Model Number: QEMU NVMe Ctrl 00:09:41.852 Firmware Version: 8.0.0 00:09:41.852 Recommended Arb Burst: 6 00:09:41.852 IEEE OUI Identifier: 00 54 52 00:09:41.852 Multi-path I/O 00:09:41.852 May have multiple subsystem ports: No 00:09:41.852 May have multiple controllers: No 00:09:41.852 Associated with SR-IOV VF: No 00:09:41.852 Max Data Transfer Size: 524288 00:09:41.852 Max Number of Namespaces: 256 00:09:41.852 Max Number of I/O Queues: 64 00:09:41.852 NVMe Specification Version (VS): 1.4 00:09:41.852 NVMe Specification Version (Identify): 1.4 00:09:41.852 Maximum Queue Entries: 2048 00:09:41.852 Contiguous Queues Required: Yes 00:09:41.852 Arbitration Mechanisms Supported 00:09:41.852 Weighted Round Robin: Not Supported 00:09:41.852 Vendor Specific: Not Supported 00:09:41.852 Reset Timeout: 7500 ms 00:09:41.852 Doorbell Stride: 4 bytes 00:09:41.852 NVM Subsystem Reset: Not Supported 00:09:41.852 Command Sets Supported 00:09:41.852 NVM Command Set: Supported 00:09:41.852 Boot Partition: Not Supported 00:09:41.852 Memory Page Size Minimum: 4096 bytes 00:09:41.852 Memory Page Size Maximum: 65536 bytes 00:09:41.852 Persistent Memory Region: Not Supported 00:09:41.852 Optional Asynchronous Events Supported 00:09:41.852 Namespace Attribute Notices: Supported 00:09:41.852 Firmware Activation Notices: Not Supported 00:09:41.852 ANA Change Notices: Not Supported 00:09:41.852 PLE Aggregate Log Change Notices: Not Supported 00:09:41.852 LBA Status Info Alert Notices: Not Supported 00:09:41.852 EGE Aggregate Log Change Notices: Not Supported 00:09:41.852 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.852 Zone Descriptor Change Notices: Not Supported 00:09:41.852 Discovery Log Change Notices: Not Supported 00:09:41.852 Controller Attributes 00:09:41.852 128-bit Host Identifier: Not Supported 00:09:41.852 Non-Operational Permissive Mode: Not Supported 00:09:41.852 NVM Sets: Not Supported 00:09:41.852 Read Recovery Levels: Not Supported 00:09:41.852 Endurance Groups: Not Supported 00:09:41.852 Predictable Latency Mode: Not Supported 00:09:41.852 Traffic Based Keep ALive: Not Supported 00:09:41.852 Namespace Granularity: Not Supported 00:09:41.852 SQ Associations: Not Supported 00:09:41.852 UUID List: Not Supported 00:09:41.852 Multi-Domain Subsystem: Not Supported 00:09:41.852 Fixed Capacity Management: Not Supported 00:09:41.852 Variable 
Capacity Management: Not Supported 00:09:41.852 Delete Endurance Group: Not Supported 00:09:41.852 Delete NVM Set: Not Supported 00:09:41.852 Extended LBA Formats Supported: Supported 00:09:41.852 Flexible Data Placement Supported: Not Supported 00:09:41.852 00:09:41.852 Controller Memory Buffer Support 00:09:41.852 ================================ 00:09:41.852 Supported: No 00:09:41.852 00:09:41.852 Persistent Memory Region Support 00:09:41.852 ================================ 00:09:41.852 Supported: No 00:09:41.852 00:09:41.852 Admin Command Set Attributes 00:09:41.852 ============================ 00:09:41.852 Security Send/Receive: Not Supported 00:09:41.852 Format NVM: Supported 00:09:41.852 Firmware Activate/Download: Not Supported 00:09:41.852 Namespace Management: Supported 00:09:41.852 Device Self-Test: Not Supported 00:09:41.852 Directives: Supported 00:09:41.852 NVMe-MI: Not Supported 00:09:41.852 Virtualization Management: Not Supported 00:09:41.852 Doorbell Buffer Config: Supported 00:09:41.852 Get LBA Status Capability: Not Supported 00:09:41.852 Command & Feature Lockdown Capability: Not Supported 00:09:41.852 Abort Command Limit: 4 00:09:41.852 Async Event Request Limit: 4 00:09:41.852 Number of Firmware Slots: N/A 00:09:41.852 Firmware Slot 1 Read-Only: N/A 00:09:41.852 Firmware Activation Without Reset: N/A 00:09:41.852 Multiple Update Detection Support: N/A 00:09:41.852 Firmware Update Granularity: No Information Provided 00:09:41.852 Per-Namespace SMART Log: Yes 00:09:41.852 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.852 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:41.852 Command Effects Log Page: Supported 00:09:41.852 Get Log Page Extended Data: Supported 00:09:41.852 Telemetry Log Pages: Not Supported 00:09:41.852 Persistent Event Log Pages: Not Supported 00:09:41.852 Supported Log Pages Log Page: May Support 00:09:41.852 Commands Supported & Effects Log Page: Not Supported 00:09:41.852 Feature Identifiers & Effects Log Page:May Support 00:09:41.852 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.852 Data Area 4 for Telemetry Log: Not Supported 00:09:41.852 Error Log Page Entries Supported: 1 00:09:41.852 Keep Alive: Not Supported 00:09:41.852 00:09:41.852 NVM Command Set Attributes 00:09:41.852 ========================== 00:09:41.852 Submission Queue Entry Size 00:09:41.852 Max: 64 00:09:41.852 Min: 64 00:09:41.852 Completion Queue Entry Size 00:09:41.852 Max: 16 00:09:41.852 Min: 16 00:09:41.852 Number of Namespaces: 256 00:09:41.852 Compare Command: Supported 00:09:41.852 Write Uncorrectable Command: Not Supported 00:09:41.852 Dataset Management Command: Supported 00:09:41.852 Write Zeroes Command: Supported 00:09:41.852 Set Features Save Field: Supported 00:09:41.852 Reservations: Not Supported 00:09:41.852 Timestamp: Supported 00:09:41.852 Copy: Supported 00:09:41.852 Volatile Write Cache: Present 00:09:41.852 Atomic Write Unit (Normal): 1 00:09:41.852 Atomic Write Unit (PFail): 1 00:09:41.852 Atomic Compare & Write Unit: 1 00:09:41.852 Fused Compare & Write: Not Supported 00:09:41.852 Scatter-Gather List 00:09:41.852 SGL Command Set: Supported 00:09:41.852 SGL Keyed: Not Supported 00:09:41.852 SGL Bit Bucket Descriptor: Not Supported 00:09:41.852 SGL Metadata Pointer: Not Supported 00:09:41.852 Oversized SGL: Not Supported 00:09:41.852 SGL Metadata Address: Not Supported 00:09:41.852 SGL Offset: Not Supported 00:09:41.852 Transport SGL Data Block: Not Supported 00:09:41.852 Replay Protected Memory Block: Not Supported 00:09:41.852 
00:09:41.852 Firmware Slot Information 00:09:41.852 ========================= 00:09:41.852 Active slot: 1 00:09:41.852 Slot 1 Firmware Revision: 1.0 00:09:41.852 00:09:41.852 00:09:41.852 Commands Supported and Effects 00:09:41.852 ============================== 00:09:41.852 Admin Commands 00:09:41.852 -------------- 00:09:41.852 Delete I/O Submission Queue (00h): Supported 00:09:41.852 Create I/O Submission Queue (01h): Supported 00:09:41.852 Get Log Page (02h): Supported 00:09:41.852 Delete I/O Completion Queue (04h): Supported 00:09:41.852 Create I/O Completion Queue (05h): Supported 00:09:41.852 Identify (06h): Supported 00:09:41.852 Abort (08h): Supported 00:09:41.852 Set Features (09h): Supported 00:09:41.852 Get Features (0Ah): Supported 00:09:41.852 Asynchronous Event Request (0Ch): Supported 00:09:41.852 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.852 Directive Send (19h): Supported 00:09:41.852 Directive Receive (1Ah): Supported 00:09:41.852 Virtualization Management (1Ch): Supported 00:09:41.853 Doorbell Buffer Config (7Ch): Supported 00:09:41.853 Format NVM (80h): Supported LBA-Change 00:09:41.853 I/O Commands 00:09:41.853 ------------ 00:09:41.853 Flush (00h): Supported LBA-Change 00:09:41.853 Write (01h): Supported LBA-Change 00:09:41.853 Read (02h): Supported 00:09:41.853 Compare (05h): Supported 00:09:41.853 Write Zeroes (08h): Supported LBA-Change 00:09:41.853 Dataset Management (09h): Supported LBA-Change 00:09:41.853 Unknown (0Ch): Supported 00:09:41.853 Unknown (12h): Supported 00:09:41.853 Copy (19h): Supported LBA-Change 00:09:41.853 Unknown (1Dh): Supported LBA-Change 00:09:41.853 00:09:41.853 Error Log 00:09:41.853 ========= 00:09:41.853 00:09:41.853 Arbitration 00:09:41.853 =========== 00:09:41.853 Arbitration Burst: no limit 00:09:41.853 00:09:41.853 Power Management 00:09:41.853 ================ 00:09:41.853 Number of Power States: 1 00:09:41.853 Current Power State: Power State #0 00:09:41.853 Power State #0: 00:09:41.853 Max Power: 25.00 W 00:09:41.853 Non-Operational State: Operational 00:09:41.853 Entry Latency: 16 microseconds 00:09:41.853 Exit Latency: 4 microseconds 00:09:41.853 Relative Read Throughput: 0 00:09:41.853 Relative Read Latency: 0 00:09:41.853 Relative Write Throughput: 0 00:09:41.853 Relative Write Latency: 0 00:09:41.853 Idle Power: Not Reported 00:09:41.853 Active Power: Not Reported 00:09:41.853 Non-Operational Permissive Mode: Not Supported 00:09:41.853 00:09:41.853 Health Information 00:09:41.853 ================== 00:09:41.853 Critical Warnings: 00:09:41.853 Available Spare Space: OK 00:09:41.853 Temperature: OK 00:09:41.853 Device Reliability: OK 00:09:41.853 Read Only: No 00:09:41.853 Volatile Memory Backup: OK 00:09:41.853 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.853 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.853 Available Spare: 0% 00:09:41.853 Available Spare Threshold: 0% 00:09:41.853 Life Percentage Used: 0% 00:09:41.853 Data Units Read: 1367 00:09:41.853 Data Units Written: 632 00:09:41.853 Host Read Commands: 62982 00:09:41.853 Host Write Commands: 30963 00:09:41.853 Controller Busy Time: 0 minutes 00:09:41.853 Power Cycles: 0 00:09:41.853 Power On Hours: 0 hours 00:09:41.853 Unsafe Shutdowns: 0 00:09:41.853 Unrecoverable Media Errors: 0 00:09:41.853 Lifetime Error Log Entries: 0 00:09:41.853 Warning Temperature Time: 0 minutes 00:09:41.853 Critical Temperature Time: 0 minutes 00:09:41.853 00:09:41.853 Number of Queues 00:09:41.853 ================ 00:09:41.853 Number of I/O 
Submission Queues: 64 00:09:41.853 Number of I/O Completion Queues: 64 00:09:41.853 00:09:41.853 ZNS Specific Controller Data 00:09:41.853 ============================ 00:09:41.853 Zone Append Size Limit: 0 00:09:41.853 00:09:41.853 00:09:41.853 Active Namespaces 00:09:41.853 ================= 00:09:41.853 Namespace ID:1 00:09:41.853 Error Recovery Timeout: Unlimited 00:09:41.853 Command Set Identifier: NVM (00h) [2024-12-08 14:04:44.693471] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63514 terminated unexpected 00:09:41.853 Deallocate: Supported 00:09:41.853 Deallocated/Unwritten Error: Supported 00:09:41.853 Deallocated Read Value: All 0x00 00:09:41.853 Deallocate in Write Zeroes: Not Supported 00:09:41.853 Deallocated Guard Field: 0xFFFF 00:09:41.853 Flush: Supported 00:09:41.853 Reservation: Not Supported 00:09:41.853 Namespace Sharing Capabilities: Private 00:09:41.853 Size (in LBAs): 1310720 (5GiB) 00:09:41.853 Capacity (in LBAs): 1310720 (5GiB) 00:09:41.853 Utilization (in LBAs): 1310720 (5GiB) 00:09:41.853 Thin Provisioning: Not Supported 00:09:41.853 Per-NS Atomic Units: No 00:09:41.853 Maximum Single Source Range Length: 128 00:09:41.853 Maximum Copy Length: 128 00:09:41.853 Maximum Source Range Count: 128 00:09:41.853 NGUID/EUI64 Never Reused: No 00:09:41.853 Namespace Write Protected: No 00:09:41.853 Number of LBA Formats: 8 00:09:41.853 Current LBA Format: LBA Format #04 00:09:41.853 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.853 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.853 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.853 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.853 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.853 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.853 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.853 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.853 00:09:41.853 ===================================================== 00:09:41.853 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:41.853 ===================================================== 00:09:41.853 Controller Capabilities/Features 00:09:41.853 ================================ 00:09:41.853 Vendor ID: 1b36 00:09:41.853 Subsystem Vendor ID: 1af4 00:09:41.853 Serial Number: 12343 00:09:41.853 Model Number: QEMU NVMe Ctrl 00:09:41.853 Firmware Version: 8.0.0 00:09:41.853 Recommended Arb Burst: 6 00:09:41.853 IEEE OUI Identifier: 00 54 52 00:09:41.853 Multi-path I/O 00:09:41.853 May have multiple subsystem ports: No 00:09:41.853 May have multiple controllers: Yes 00:09:41.853 Associated with SR-IOV VF: No 00:09:41.853 Max Data Transfer Size: 524288 00:09:41.853 Max Number of Namespaces: 256 00:09:41.853 Max Number of I/O Queues: 64 00:09:41.853 NVMe Specification Version (VS): 1.4 00:09:41.853 NVMe Specification Version (Identify): 1.4 00:09:41.853 Maximum Queue Entries: 2048 00:09:41.853 Contiguous Queues Required: Yes 00:09:41.853 Arbitration Mechanisms Supported 00:09:41.853 Weighted Round Robin: Not Supported 00:09:41.853 Vendor Specific: Not Supported 00:09:41.853 Reset Timeout: 7500 ms 00:09:41.853 Doorbell Stride: 4 bytes 00:09:41.853 NVM Subsystem Reset: Not Supported 00:09:41.853 Command Sets Supported 00:09:41.853 NVM Command Set: Supported 00:09:41.853 Boot Partition: Not Supported 00:09:41.853 Memory Page Size Minimum: 4096 bytes 00:09:41.853 Memory Page Size Maximum: 65536 bytes 00:09:41.853 Persistent Memory Region: Not Supported 00:09:41.853
Optional Asynchronous Events Supported 00:09:41.853 Namespace Attribute Notices: Supported 00:09:41.853 Firmware Activation Notices: Not Supported 00:09:41.853 ANA Change Notices: Not Supported 00:09:41.853 PLE Aggregate Log Change Notices: Not Supported 00:09:41.853 LBA Status Info Alert Notices: Not Supported 00:09:41.853 EGE Aggregate Log Change Notices: Not Supported 00:09:41.853 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.853 Zone Descriptor Change Notices: Not Supported 00:09:41.853 Discovery Log Change Notices: Not Supported 00:09:41.853 Controller Attributes 00:09:41.853 128-bit Host Identifier: Not Supported 00:09:41.853 Non-Operational Permissive Mode: Not Supported 00:09:41.853 NVM Sets: Not Supported 00:09:41.853 Read Recovery Levels: Not Supported 00:09:41.853 Endurance Groups: Supported 00:09:41.853 Predictable Latency Mode: Not Supported 00:09:41.853 Traffic Based Keep ALive: Not Supported 00:09:41.853 Namespace Granularity: Not Supported 00:09:41.853 SQ Associations: Not Supported 00:09:41.853 UUID List: Not Supported 00:09:41.853 Multi-Domain Subsystem: Not Supported 00:09:41.853 Fixed Capacity Management: Not Supported 00:09:41.853 Variable Capacity Management: Not Supported 00:09:41.853 Delete Endurance Group: Not Supported 00:09:41.853 Delete NVM Set: Not Supported 00:09:41.853 Extended LBA Formats Supported: Supported 00:09:41.853 Flexible Data Placement Supported: Supported 00:09:41.853 00:09:41.853 Controller Memory Buffer Support 00:09:41.853 ================================ 00:09:41.853 Supported: No 00:09:41.853 00:09:41.853 Persistent Memory Region Support 00:09:41.853 ================================ 00:09:41.853 Supported: No 00:09:41.853 00:09:41.853 Admin Command Set Attributes 00:09:41.853 ============================ 00:09:41.853 Security Send/Receive: Not Supported 00:09:41.853 Format NVM: Supported 00:09:41.853 Firmware Activate/Download: Not Supported 00:09:41.853 Namespace Management: Supported 00:09:41.853 Device Self-Test: Not Supported 00:09:41.853 Directives: Supported 00:09:41.853 NVMe-MI: Not Supported 00:09:41.853 Virtualization Management: Not Supported 00:09:41.853 Doorbell Buffer Config: Supported 00:09:41.853 Get LBA Status Capability: Not Supported 00:09:41.853 Command & Feature Lockdown Capability: Not Supported 00:09:41.853 Abort Command Limit: 4 00:09:41.853 Async Event Request Limit: 4 00:09:41.853 Number of Firmware Slots: N/A 00:09:41.853 Firmware Slot 1 Read-Only: N/A 00:09:41.853 Firmware Activation Without Reset: N/A 00:09:41.853 Multiple Update Detection Support: N/A 00:09:41.854 Firmware Update Granularity: No Information Provided 00:09:41.854 Per-Namespace SMART Log: Yes 00:09:41.854 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.854 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:41.854 Command Effects Log Page: Supported 00:09:41.854 Get Log Page Extended Data: Supported 00:09:41.854 Telemetry Log Pages: Not Supported 00:09:41.854 Persistent Event Log Pages: Not Supported 00:09:41.854 Supported Log Pages Log Page: May Support 00:09:41.854 Commands Supported & Effects Log Page: Not Supported 00:09:41.854 Feature Identifiers & Effects Log Page:May Support 00:09:41.854 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.854 Data Area 4 for Telemetry Log: Not Supported 00:09:41.854 Error Log Page Entries Supported: 1 00:09:41.854 Keep Alive: Not Supported 00:09:41.854 00:09:41.854 NVM Command Set Attributes 00:09:41.854 ========================== 00:09:41.854 Submission Queue Entry Size 
00:09:41.854 Max: 64 00:09:41.854 Min: 64 00:09:41.854 Completion Queue Entry Size 00:09:41.854 Max: 16 00:09:41.854 Min: 16 00:09:41.854 Number of Namespaces: 256 00:09:41.854 Compare Command: Supported 00:09:41.854 Write Uncorrectable Command: Not Supported 00:09:41.854 Dataset Management Command: Supported 00:09:41.854 Write Zeroes Command: Supported 00:09:41.854 Set Features Save Field: Supported 00:09:41.854 Reservations: Not Supported 00:09:41.854 Timestamp: Supported 00:09:41.854 Copy: Supported 00:09:41.854 Volatile Write Cache: Present 00:09:41.854 Atomic Write Unit (Normal): 1 00:09:41.854 Atomic Write Unit (PFail): 1 00:09:41.854 Atomic Compare & Write Unit: 1 00:09:41.854 Fused Compare & Write: Not Supported 00:09:41.854 Scatter-Gather List 00:09:41.854 SGL Command Set: Supported 00:09:41.854 SGL Keyed: Not Supported 00:09:41.854 SGL Bit Bucket Descriptor: Not Supported 00:09:41.854 SGL Metadata Pointer: Not Supported 00:09:41.854 Oversized SGL: Not Supported 00:09:41.854 SGL Metadata Address: Not Supported 00:09:41.854 SGL Offset: Not Supported 00:09:41.854 Transport SGL Data Block: Not Supported 00:09:41.854 Replay Protected Memory Block: Not Supported 00:09:41.854 00:09:41.854 Firmware Slot Information 00:09:41.854 ========================= 00:09:41.854 Active slot: 1 00:09:41.854 Slot 1 Firmware Revision: 1.0 00:09:41.854 00:09:41.854 00:09:41.854 Commands Supported and Effects 00:09:41.854 ============================== 00:09:41.854 Admin Commands 00:09:41.854 -------------- 00:09:41.854 Delete I/O Submission Queue (00h): Supported 00:09:41.854 Create I/O Submission Queue (01h): Supported 00:09:41.854 Get Log Page (02h): Supported 00:09:41.854 Delete I/O Completion Queue (04h): Supported 00:09:41.854 Create I/O Completion Queue (05h): Supported 00:09:41.854 Identify (06h): Supported 00:09:41.854 Abort (08h): Supported 00:09:41.854 Set Features (09h): Supported 00:09:41.854 Get Features (0Ah): Supported 00:09:41.854 Asynchronous Event Request (0Ch): Supported 00:09:41.854 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.854 Directive Send (19h): Supported 00:09:41.854 Directive Receive (1Ah): Supported 00:09:41.854 Virtualization Management (1Ch): Supported 00:09:41.854 Doorbell Buffer Config (7Ch): Supported 00:09:41.854 Format NVM (80h): Supported LBA-Change 00:09:41.854 I/O Commands 00:09:41.854 ------------ 00:09:41.854 Flush (00h): Supported LBA-Change 00:09:41.854 Write (01h): Supported LBA-Change 00:09:41.854 Read (02h): Supported 00:09:41.854 Compare (05h): Supported 00:09:41.854 Write Zeroes (08h): Supported LBA-Change 00:09:41.854 Dataset Management (09h): Supported LBA-Change 00:09:41.854 Unknown (0Ch): Supported 00:09:41.854 Unknown (12h): Supported 00:09:41.854 Copy (19h): Supported LBA-Change 00:09:41.854 Unknown (1Dh): Supported LBA-Change 00:09:41.854 00:09:41.854 Error Log 00:09:41.854 ========= 00:09:41.854 00:09:41.854 Arbitration 00:09:41.854 =========== 00:09:41.854 Arbitration Burst: no limit 00:09:41.854 00:09:41.854 Power Management 00:09:41.854 ================ 00:09:41.854 Number of Power States: 1 00:09:41.854 Current Power State: Power State #0 00:09:41.854 Power State #0: 00:09:41.854 Max Power: 25.00 W 00:09:41.854 Non-Operational State: Operational 00:09:41.854 Entry Latency: 16 microseconds 00:09:41.854 Exit Latency: 4 microseconds 00:09:41.854 Relative Read Throughput: 0 00:09:41.854 Relative Read Latency: 0 00:09:41.854 Relative Write Throughput: 0 00:09:41.854 Relative Write Latency: 0 00:09:41.854 Idle Power: Not 
Reported 00:09:41.854 Active Power: Not Reported 00:09:41.854 Non-Operational Permissive Mode: Not Supported 00:09:41.854 00:09:41.854 Health Information 00:09:41.854 ================== 00:09:41.854 Critical Warnings: 00:09:41.854 Available Spare Space: OK 00:09:41.854 Temperature: OK 00:09:41.854 Device Reliability: OK 00:09:41.854 Read Only: No 00:09:41.854 Volatile Memory Backup: OK 00:09:41.854 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.854 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.854 Available Spare: 0% 00:09:41.854 Available Spare Threshold: 0% 00:09:41.854 Life Percentage Used: 0% 00:09:41.854 Data Units Read: 1531 00:09:41.854 Data Units Written: 716 00:09:41.854 Host Read Commands: 64304 00:09:41.854 Host Write Commands: 31639 00:09:41.854 Controller Busy Time: 0 minutes 00:09:41.854 Power Cycles: 0 00:09:41.854 Power On Hours: 0 hours 00:09:41.854 Unsafe Shutdowns: 0 00:09:41.854 Unrecoverable Media Errors: 0 00:09:41.854 Lifetime Error Log Entries: 0 00:09:41.854 Warning Temperature Time: 0 minutes 00:09:41.854 Critical Temperature Time: 0 minutes 00:09:41.854 00:09:41.854 Number of Queues 00:09:41.854 ================ 00:09:41.854 Number of I/O Submission Queues: 64 00:09:41.854 Number of I/O Completion Queues: 64 00:09:41.854 00:09:41.854 ZNS Specific Controller Data 00:09:41.854 ============================ 00:09:41.854 Zone Append Size Limit: 0 00:09:41.854 00:09:41.854 00:09:41.854 Active Namespaces 00:09:41.854 ================= 00:09:41.854 Namespace ID:1 00:09:41.854 Error Recovery Timeout: Unlimited 00:09:41.854 Command Set Identifier: NVM (00h) 00:09:41.854 Deallocate: Supported 00:09:41.854 Deallocated/Unwritten Error: Supported 00:09:41.854 Deallocated Read Value: All 0x00 00:09:41.854 Deallocate in Write Zeroes: Not Supported 00:09:41.854 Deallocated Guard Field: 0xFFFF 00:09:41.854 Flush: Supported 00:09:41.854 Reservation: Not Supported 00:09:41.854 Namespace Sharing Capabilities: Multiple Controllers 00:09:41.854 Size (in LBAs): 262144 (1GiB) 00:09:41.854 Capacity (in LBAs): 262144 (1GiB) 00:09:41.854 Utilization (in LBAs): 262144 (1GiB) 00:09:41.854 Thin Provisioning: Not Supported 00:09:41.854 Per-NS Atomic Units: No 00:09:41.854 Maximum Single Source Range Length: 128 00:09:41.854 Maximum Copy Length: 128 00:09:41.854 Maximum Source Range Count: 128 00:09:41.854 NGUID/EUI64 Never Reused: No 00:09:41.854 Namespace Write Protected: No 00:09:41.854 Endurance group ID: 1 00:09:41.854 Number of LBA Formats: 8 00:09:41.854 Current LBA Format: LBA Format #04 00:09:41.854 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.854 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.854 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.854 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.854 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.854 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.854 [2024-12-08 14:04:44.695904] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63514 terminated unexpected 00:09:41.854 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.854 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.854 00:09:41.854 Get Feature FDP: 00:09:41.854 ================ 00:09:41.854 Enabled: Yes 00:09:41.854 FDP configuration index: 0 00:09:41.854 00:09:41.854 FDP configurations log page 00:09:41.854 =========================== 00:09:41.854 Number of FDP configurations: 1 00:09:41.854 Version: 0 00:09:41.854 Size: 112 00:09:41.854 FDP
Configuration Descriptor: 0 00:09:41.854 Descriptor Size: 96 00:09:41.854 Reclaim Group Identifier format: 2 00:09:41.854 FDP Volatile Write Cache: Not Present 00:09:41.854 FDP Configuration: Valid 00:09:41.854 Vendor Specific Size: 0 00:09:41.854 Number of Reclaim Groups: 2 00:09:41.854 Number of Reclaim Unit Handles: 8 00:09:41.854 Max Placement Identifiers: 128 00:09:41.854 Number of Namespaces Supported: 256 00:09:41.854 Reclaim Unit Nominal Size: 6000000 bytes 00:09:41.854 Estimated Reclaim Unit Time Limit: Not Reported 00:09:41.854 RUH Desc #000: RUH Type: Initially Isolated 00:09:41.854 RUH Desc #001: RUH Type: Initially Isolated 00:09:41.854 RUH Desc #002: RUH Type: Initially Isolated 00:09:41.855 RUH Desc #003: RUH Type: Initially Isolated 00:09:41.855 RUH Desc #004: RUH Type: Initially Isolated 00:09:41.855 RUH Desc #005: RUH Type: Initially Isolated 00:09:41.855 RUH Desc #006: RUH Type: Initially Isolated 00:09:41.855 RUH Desc #007: RUH Type: Initially Isolated 00:09:41.855 00:09:41.855 FDP reclaim unit handle usage log page 00:09:41.855 ====================================== 00:09:41.855 Number of Reclaim Unit Handles: 8 00:09:41.855 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:41.855 RUH Usage Desc #001: RUH Attributes: Unused 00:09:41.855 RUH Usage Desc #002: RUH Attributes: Unused 00:09:41.855 RUH Usage Desc #003: RUH Attributes: Unused 00:09:41.855 RUH Usage Desc #004: RUH Attributes: Unused 00:09:41.855 RUH Usage Desc #005: RUH Attributes: Unused 00:09:41.855 RUH Usage Desc #006: RUH Attributes: Unused 00:09:41.855 RUH Usage Desc #007: RUH Attributes: Unused 00:09:41.855 00:09:41.855 FDP statistics log page 00:09:41.855 ======================= 00:09:41.855 Host bytes with metadata written: 474501120 00:09:41.855 Media bytes with metadata written: 474587136 00:09:41.855 Media bytes erased: 0 00:09:41.855 00:09:41.855 FDP events log page 00:09:41.855 =================== 00:09:41.855 Number of FDP events: 0 00:09:41.855 00:09:41.855 ===================================================== 00:09:41.855 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:41.855 ===================================================== 00:09:41.855 Controller Capabilities/Features 00:09:41.855 ================================ 00:09:41.855 Vendor ID: 1b36 00:09:41.855 Subsystem Vendor ID: 1af4 00:09:41.855 Serial Number: 12342 00:09:41.855 Model Number: QEMU NVMe Ctrl 00:09:41.855 Firmware Version: 8.0.0 00:09:41.855 Recommended Arb Burst: 6 00:09:41.855 IEEE OUI Identifier: 00 54 52 00:09:41.855 Multi-path I/O 00:09:41.855 May have multiple subsystem ports: No 00:09:41.855 May have multiple controllers: No 00:09:41.855 Associated with SR-IOV VF: No 00:09:41.855 Max Data Transfer Size: 524288 00:09:41.855 Max Number of Namespaces: 256 00:09:41.855 Max Number of I/O Queues: 64 00:09:41.855 NVMe Specification Version (VS): 1.4 00:09:41.855 NVMe Specification Version (Identify): 1.4 00:09:41.855 Maximum Queue Entries: 2048 00:09:41.855 Contiguous Queues Required: Yes 00:09:41.855 Arbitration Mechanisms Supported 00:09:41.855 Weighted Round Robin: Not Supported 00:09:41.855 Vendor Specific: Not Supported 00:09:41.855 Reset Timeout: 7500 ms 00:09:41.855 Doorbell Stride: 4 bytes 00:09:41.855 NVM Subsystem Reset: Not Supported 00:09:41.855 Command Sets Supported 00:09:41.855 NVM Command Set: Supported 00:09:41.855 Boot Partition: Not Supported 00:09:41.855 Memory Page Size Minimum: 4096 bytes 00:09:41.855 Memory Page Size Maximum: 65536 bytes 00:09:41.855 Persistent Memory Region: Not
Supported 00:09:41.855 Optional Asynchronous Events Supported 00:09:41.855 Namespace Attribute Notices: Supported 00:09:41.855 Firmware Activation Notices: Not Supported 00:09:41.855 ANA Change Notices: Not Supported 00:09:41.855 PLE Aggregate Log Change Notices: Not Supported 00:09:41.855 LBA Status Info Alert Notices: Not Supported 00:09:41.855 EGE Aggregate Log Change Notices: Not Supported 00:09:41.855 Normal NVM Subsystem Shutdown event: Not Supported 00:09:41.855 Zone Descriptor Change Notices: Not Supported 00:09:41.855 Discovery Log Change Notices: Not Supported 00:09:41.855 Controller Attributes 00:09:41.855 128-bit Host Identifier: Not Supported 00:09:41.855 Non-Operational Permissive Mode: Not Supported 00:09:41.855 NVM Sets: Not Supported 00:09:41.855 Read Recovery Levels: Not Supported 00:09:41.855 Endurance Groups: Not Supported 00:09:41.855 Predictable Latency Mode: Not Supported 00:09:41.855 Traffic Based Keep Alive: Not Supported 00:09:41.855 Namespace Granularity: Not Supported 00:09:41.855 SQ Associations: Not Supported 00:09:41.855 UUID List: Not Supported 00:09:41.855 Multi-Domain Subsystem: Not Supported 00:09:41.855 Fixed Capacity Management: Not Supported 00:09:41.855 Variable Capacity Management: Not Supported 00:09:41.855 Delete Endurance Group: Not Supported 00:09:41.855 Delete NVM Set: Not Supported 00:09:41.855 Extended LBA Formats Supported: Supported 00:09:41.855 Flexible Data Placement Supported: Not Supported 00:09:41.855 00:09:41.855 Controller Memory Buffer Support 00:09:41.855 ================================ 00:09:41.855 Supported: No 00:09:41.855 00:09:41.855 Persistent Memory Region Support 00:09:41.855 ================================ 00:09:41.855 Supported: No 00:09:41.855 00:09:41.855 Admin Command Set Attributes 00:09:41.855 ============================ 00:09:41.855 Security Send/Receive: Not Supported 00:09:41.855 Format NVM: Supported 00:09:41.855 Firmware Activate/Download: Not Supported 00:09:41.855 Namespace Management: Supported 00:09:41.855 Device Self-Test: Not Supported 00:09:41.855 Directives: Supported 00:09:41.855 NVMe-MI: Not Supported 00:09:41.855 Virtualization Management: Not Supported 00:09:41.855 Doorbell Buffer Config: Supported 00:09:41.855 Get LBA Status Capability: Not Supported 00:09:41.855 Command & Feature Lockdown Capability: Not Supported 00:09:41.855 Abort Command Limit: 4 00:09:41.855 Async Event Request Limit: 4 00:09:41.855 Number of Firmware Slots: N/A 00:09:41.855 Firmware Slot 1 Read-Only: N/A 00:09:41.855 Firmware Activation Without Reset: N/A 00:09:41.855 Multiple Update Detection Support: N/A 00:09:41.855 Firmware Update Granularity: No Information Provided 00:09:41.855 Per-Namespace SMART Log: Yes 00:09:41.855 Asymmetric Namespace Access Log Page: Not Supported 00:09:41.855 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:41.855 Command Effects Log Page: Supported 00:09:41.855 Get Log Page Extended Data: Supported 00:09:41.855 Telemetry Log Pages: Not Supported 00:09:41.855 Persistent Event Log Pages: Not Supported 00:09:41.855 Supported Log Pages Log Page: May Support 00:09:41.855 Commands Supported & Effects Log Page: Not Supported 00:09:41.855 Feature Identifiers & Effects Log Page: May Support 00:09:41.855 NVMe-MI Commands & Effects Log Page: May Support 00:09:41.855 Data Area 4 for Telemetry Log: Not Supported 00:09:41.855 Error Log Page Entries Supported: 1 00:09:41.855 Keep Alive: Not Supported 00:09:41.855 00:09:41.855 NVM Command Set Attributes 00:09:41.855 ========================== 00:09:41.855
Submission Queue Entry Size 00:09:41.855 Max: 64 00:09:41.855 Min: 64 00:09:41.855 Completion Queue Entry Size 00:09:41.855 Max: 16 00:09:41.855 Min: 16 00:09:41.855 Number of Namespaces: 256 00:09:41.855 Compare Command: Supported 00:09:41.855 Write Uncorrectable Command: Not Supported 00:09:41.855 Dataset Management Command: Supported 00:09:41.855 Write Zeroes Command: Supported 00:09:41.855 Set Features Save Field: Supported 00:09:41.855 Reservations: Not Supported 00:09:41.855 Timestamp: Supported 00:09:41.855 Copy: Supported 00:09:41.855 Volatile Write Cache: Present 00:09:41.855 Atomic Write Unit (Normal): 1 00:09:41.855 Atomic Write Unit (PFail): 1 00:09:41.855 Atomic Compare & Write Unit: 1 00:09:41.855 Fused Compare & Write: Not Supported 00:09:41.855 Scatter-Gather List 00:09:41.855 SGL Command Set: Supported 00:09:41.855 SGL Keyed: Not Supported 00:09:41.855 SGL Bit Bucket Descriptor: Not Supported 00:09:41.855 SGL Metadata Pointer: Not Supported 00:09:41.855 Oversized SGL: Not Supported 00:09:41.855 SGL Metadata Address: Not Supported 00:09:41.855 SGL Offset: Not Supported 00:09:41.855 Transport SGL Data Block: Not Supported 00:09:41.855 Replay Protected Memory Block: Not Supported 00:09:41.855 00:09:41.855 Firmware Slot Information 00:09:41.855 ========================= 00:09:41.855 Active slot: 1 00:09:41.855 Slot 1 Firmware Revision: 1.0 00:09:41.855 00:09:41.855 00:09:41.855 Commands Supported and Effects 00:09:41.856 ============================== 00:09:41.856 Admin Commands 00:09:41.856 -------------- 00:09:41.856 Delete I/O Submission Queue (00h): Supported 00:09:41.856 Create I/O Submission Queue (01h): Supported 00:09:41.856 Get Log Page (02h): Supported 00:09:41.856 Delete I/O Completion Queue (04h): Supported 00:09:41.856 Create I/O Completion Queue (05h): Supported 00:09:41.856 Identify (06h): Supported 00:09:41.856 Abort (08h): Supported 00:09:41.856 Set Features (09h): Supported 00:09:41.856 Get Features (0Ah): Supported 00:09:41.856 Asynchronous Event Request (0Ch): Supported 00:09:41.856 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:41.856 Directive Send (19h): Supported 00:09:41.856 Directive Receive (1Ah): Supported 00:09:41.856 Virtualization Management (1Ch): Supported 00:09:41.856 Doorbell Buffer Config (7Ch): Supported 00:09:41.856 Format NVM (80h): Supported LBA-Change 00:09:41.856 I/O Commands 00:09:41.856 ------------ 00:09:41.856 Flush (00h): Supported LBA-Change 00:09:41.856 Write (01h): Supported LBA-Change 00:09:41.856 Read (02h): Supported 00:09:41.856 Compare (05h): Supported 00:09:41.856 Write Zeroes (08h): Supported LBA-Change 00:09:41.856 Dataset Management (09h): Supported LBA-Change 00:09:41.856 Unknown (0Ch): Supported 00:09:41.856 Unknown (12h): Supported 00:09:41.856 Copy (19h): Supported LBA-Change 00:09:41.856 Unknown (1Dh): Supported LBA-Change 00:09:41.856 00:09:41.856 Error Log 00:09:41.856 ========= 00:09:41.856 00:09:41.856 Arbitration 00:09:41.856 =========== 00:09:41.856 Arbitration Burst: no limit 00:09:41.856 00:09:41.856 Power Management 00:09:41.856 ================ 00:09:41.856 Number of Power States: 1 00:09:41.856 Current Power State: Power State #0 00:09:41.856 Power State #0: 00:09:41.856 Max Power: 25.00 W 00:09:41.856 Non-Operational State: Operational 00:09:41.856 Entry Latency: 16 microseconds 00:09:41.856 Exit Latency: 4 microseconds 00:09:41.856 Relative Read Throughput: 0 00:09:41.856 Relative Read Latency: 0 00:09:41.856 Relative Write Throughput: 0 00:09:41.856 Relative Write Latency: 0 
00:09:41.856 Idle Power: Not Reported 00:09:41.856 Active Power: Not Reported 00:09:41.856 Non-Operational Permissive Mode: Not Supported 00:09:41.856 00:09:41.856 Health Information 00:09:41.856 ================== 00:09:41.856 Critical Warnings: 00:09:41.856 Available Spare Space: OK 00:09:41.856 Temperature: OK 00:09:41.856 Device Reliability: OK 00:09:41.856 Read Only: No 00:09:41.856 Volatile Memory Backup: OK 00:09:41.856 Current Temperature: 323 Kelvin (50 Celsius) 00:09:41.856 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:41.856 Available Spare: 0% 00:09:41.856 Available Spare Threshold: 0% 00:09:41.856 Life Percentage Used: 0% 00:09:41.856 Data Units Read: 4240 00:09:41.856 Data Units Written: 1957 00:09:41.856 Host Read Commands: 190473 00:09:41.856 Host Write Commands: 93441 00:09:41.856 Controller Busy Time: 0 minutes 00:09:41.856 Power Cycles: 0 00:09:41.856 Power On Hours: 0 hours 00:09:41.856 Unsafe Shutdowns: 0 00:09:41.856 Unrecoverable Media Errors: 0 00:09:41.856 Lifetime Error Log Entries: 0 00:09:41.856 Warning Temperature Time: 0 minutes 00:09:41.856 Critical Temperature Time: 0 minutes 00:09:41.856 00:09:41.856 Number of Queues 00:09:41.856 ================ 00:09:41.856 Number of I/O Submission Queues: 64 00:09:41.856 Number of I/O Completion Queues: 64 00:09:41.856 00:09:41.856 ZNS Specific Controller Data 00:09:41.856 ============================ 00:09:41.856 Zone Append Size Limit: 0 00:09:41.856 00:09:41.856 00:09:41.856 Active Namespaces 00:09:41.856 ================= 00:09:41.856 Namespace ID:1 00:09:41.856 Error Recovery Timeout: Unlimited 00:09:41.856 Command Set Identifier: NVM (00h) 00:09:41.856 Deallocate: Supported 00:09:41.856 Deallocated/Unwritten Error: Supported 00:09:41.856 Deallocated Read Value: All 0x00 00:09:41.856 Deallocate in Write Zeroes: Not Supported 00:09:41.856 Deallocated Guard Field: 0xFFFF 00:09:41.856 Flush: Supported 00:09:41.856 Reservation: Not Supported 00:09:41.856 Namespace Sharing Capabilities: Private 00:09:41.856 Size (in LBAs): 1048576 (4GiB) 00:09:41.856 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.856 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.856 Thin Provisioning: Not Supported 00:09:41.856 Per-NS Atomic Units: No 00:09:41.856 Maximum Single Source Range Length: 128 00:09:41.856 Maximum Copy Length: 128 00:09:41.856 Maximum Source Range Count: 128 00:09:41.856 NGUID/EUI64 Never Reused: No 00:09:41.856 Namespace Write Protected: No 00:09:41.856 Number of LBA Formats: 8 00:09:41.856 Current LBA Format: LBA Format #04 00:09:41.856 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.856 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.856 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.856 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.856 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.856 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.856 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.856 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.856 00:09:41.856 Namespace ID:2 00:09:41.856 Error Recovery Timeout: Unlimited 00:09:41.856 Command Set Identifier: NVM (00h) 00:09:41.856 Deallocate: Supported 00:09:41.856 Deallocated/Unwritten Error: Supported 00:09:41.856 Deallocated Read Value: All 0x00 00:09:41.856 Deallocate in Write Zeroes: Not Supported 00:09:41.856 Deallocated Guard Field: 0xFFFF 00:09:41.856 Flush: Supported 00:09:41.856 Reservation: Not Supported 00:09:41.856 Namespace Sharing Capabilities: Private 00:09:41.856 Size (in LBAs): 
1048576 (4GiB) 00:09:41.856 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.856 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.856 Thin Provisioning: Not Supported 00:09:41.856 Per-NS Atomic Units: No 00:09:41.856 Maximum Single Source Range Length: 128 00:09:41.856 Maximum Copy Length: 128 00:09:41.856 Maximum Source Range Count: 128 00:09:41.856 NGUID/EUI64 Never Reused: No 00:09:41.856 Namespace Write Protected: No 00:09:41.856 Number of LBA Formats: 8 00:09:41.856 Current LBA Format: LBA Format #04 00:09:41.856 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.856 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.856 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.856 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.856 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.856 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.856 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.856 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.856 00:09:41.856 Namespace ID:3 00:09:41.856 Error Recovery Timeout: Unlimited 00:09:41.856 Command Set Identifier: NVM (00h) 00:09:41.856 Deallocate: Supported 00:09:41.856 Deallocated/Unwritten Error: Supported 00:09:41.856 Deallocated Read Value: All 0x00 00:09:41.856 Deallocate in Write Zeroes: Not Supported 00:09:41.856 Deallocated Guard Field: 0xFFFF 00:09:41.856 Flush: Supported 00:09:41.856 Reservation: Not Supported 00:09:41.856 Namespace Sharing Capabilities: Private 00:09:41.856 Size (in LBAs): 1048576 (4GiB) 00:09:41.856 Capacity (in LBAs): 1048576 (4GiB) 00:09:41.856 Utilization (in LBAs): 1048576 (4GiB) 00:09:41.856 Thin Provisioning: Not Supported 00:09:41.856 Per-NS Atomic Units: No 00:09:41.856 Maximum Single Source Range Length: 128 00:09:41.856 Maximum Copy Length: 128 00:09:41.856 Maximum Source Range Count: 128 00:09:41.856 NGUID/EUI64 Never Reused: No 00:09:41.856 Namespace Write Protected: No 00:09:41.856 Number of LBA Formats: 8 00:09:41.856 Current LBA Format: LBA Format #04 00:09:41.856 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:41.856 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:41.856 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:41.856 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:41.856 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:41.856 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:41.856 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:41.856 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:41.856 00:09:41.856 14:04:44 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:41.856 14:04:44 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:42.118 ===================================================== 00:09:42.118 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:42.118 ===================================================== 00:09:42.118 Controller Capabilities/Features 00:09:42.118 ================================ 00:09:42.118 Vendor ID: 1b36 00:09:42.118 Subsystem Vendor ID: 1af4 00:09:42.118 Serial Number: 12340 00:09:42.118 Model Number: QEMU NVMe Ctrl 00:09:42.118 Firmware Version: 8.0.0 00:09:42.118 Recommended Arb Burst: 6 00:09:42.118 IEEE OUI Identifier: 00 54 52 00:09:42.118 Multi-path I/O 00:09:42.118 May have multiple subsystem ports: No 00:09:42.118 May have multiple controllers: No 00:09:42.118 Associated with SR-IOV VF: No 00:09:42.118 Max Data Transfer Size: 524288 00:09:42.118 Max Number of Namespaces: 256 
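(Note: the advertised namespace capacities are consistent with the LBA counts once multiplied by the 4096-byte data size of the current LBA format, #04; a quick shell-arithmetic check on the values dumped above.)
    echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4GiB
    echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5GiB
    echo $((  262144 * 4096 ))   # 1073741824 bytes = 1GiB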
00:09:42.118 Max Number of I/O Queues: 64 00:09:42.118 NVMe Specification Version (VS): 1.4 00:09:42.118 NVMe Specification Version (Identify): 1.4 00:09:42.118 Maximum Queue Entries: 2048 00:09:42.118 Contiguous Queues Required: Yes 00:09:42.118 Arbitration Mechanisms Supported 00:09:42.118 Weighted Round Robin: Not Supported 00:09:42.118 Vendor Specific: Not Supported 00:09:42.118 Reset Timeout: 7500 ms 00:09:42.118 Doorbell Stride: 4 bytes 00:09:42.118 NVM Subsystem Reset: Not Supported 00:09:42.118 Command Sets Supported 00:09:42.118 NVM Command Set: Supported 00:09:42.118 Boot Partition: Not Supported 00:09:42.118 Memory Page Size Minimum: 4096 bytes 00:09:42.118 Memory Page Size Maximum: 65536 bytes 00:09:42.118 Persistent Memory Region: Not Supported 00:09:42.118 Optional Asynchronous Events Supported 00:09:42.118 Namespace Attribute Notices: Supported 00:09:42.118 Firmware Activation Notices: Not Supported 00:09:42.118 ANA Change Notices: Not Supported 00:09:42.118 PLE Aggregate Log Change Notices: Not Supported 00:09:42.118 LBA Status Info Alert Notices: Not Supported 00:09:42.118 EGE Aggregate Log Change Notices: Not Supported 00:09:42.118 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.118 Zone Descriptor Change Notices: Not Supported 00:09:42.118 Discovery Log Change Notices: Not Supported 00:09:42.118 Controller Attributes 00:09:42.118 128-bit Host Identifier: Not Supported 00:09:42.118 Non-Operational Permissive Mode: Not Supported 00:09:42.118 NVM Sets: Not Supported 00:09:42.118 Read Recovery Levels: Not Supported 00:09:42.118 Endurance Groups: Not Supported 00:09:42.118 Predictable Latency Mode: Not Supported 00:09:42.118 Traffic Based Keep Alive: Not Supported 00:09:42.118 Namespace Granularity: Not Supported 00:09:42.118 SQ Associations: Not Supported 00:09:42.118 UUID List: Not Supported 00:09:42.118 Multi-Domain Subsystem: Not Supported 00:09:42.118 Fixed Capacity Management: Not Supported 00:09:42.119 Variable Capacity Management: Not Supported 00:09:42.119 Delete Endurance Group: Not Supported 00:09:42.119 Delete NVM Set: Not Supported 00:09:42.119 Extended LBA Formats Supported: Supported 00:09:42.119 Flexible Data Placement Supported: Not Supported 00:09:42.119 00:09:42.119 Controller Memory Buffer Support 00:09:42.119 ================================ 00:09:42.119 Supported: No 00:09:42.119 00:09:42.119 Persistent Memory Region Support 00:09:42.119 ================================ 00:09:42.119 Supported: No 00:09:42.119 00:09:42.119 Admin Command Set Attributes 00:09:42.119 ============================ 00:09:42.119 Security Send/Receive: Not Supported 00:09:42.119 Format NVM: Supported 00:09:42.119 Firmware Activate/Download: Not Supported 00:09:42.119 Namespace Management: Supported 00:09:42.119 Device Self-Test: Not Supported 00:09:42.119 Directives: Supported 00:09:42.119 NVMe-MI: Not Supported 00:09:42.119 Virtualization Management: Not Supported 00:09:42.119 Doorbell Buffer Config: Supported 00:09:42.119 Get LBA Status Capability: Not Supported 00:09:42.119 Command & Feature Lockdown Capability: Not Supported 00:09:42.119 Abort Command Limit: 4 00:09:42.119 Async Event Request Limit: 4 00:09:42.119 Number of Firmware Slots: N/A 00:09:42.119 Firmware Slot 1 Read-Only: N/A 00:09:42.119 Firmware Activation Without Reset: N/A 00:09:42.119 Multiple Update Detection Support: N/A 00:09:42.119 Firmware Update Granularity: No Information Provided 00:09:42.119 Per-Namespace SMART Log: Yes 00:09:42.119 Asymmetric Namespace Access Log Page: Not Supported
00:09:42.119 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:42.119 Command Effects Log Page: Supported 00:09:42.119 Get Log Page Extended Data: Supported 00:09:42.119 Telemetry Log Pages: Not Supported 00:09:42.119 Persistent Event Log Pages: Not Supported 00:09:42.119 Supported Log Pages Log Page: May Support 00:09:42.119 Commands Supported & Effects Log Page: Not Supported 00:09:42.119 Feature Identifiers & Effects Log Page: May Support 00:09:42.119 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.119 Data Area 4 for Telemetry Log: Not Supported 00:09:42.119 Error Log Page Entries Supported: 1 00:09:42.119 Keep Alive: Not Supported 00:09:42.119 00:09:42.119 NVM Command Set Attributes 00:09:42.119 ========================== 00:09:42.119 Submission Queue Entry Size 00:09:42.119 Max: 64 00:09:42.119 Min: 64 00:09:42.119 Completion Queue Entry Size 00:09:42.119 Max: 16 00:09:42.119 Min: 16 00:09:42.119 Number of Namespaces: 256 00:09:42.119 Compare Command: Supported 00:09:42.119 Write Uncorrectable Command: Not Supported 00:09:42.119 Dataset Management Command: Supported 00:09:42.119 Write Zeroes Command: Supported 00:09:42.119 Set Features Save Field: Supported 00:09:42.119 Reservations: Not Supported 00:09:42.119 Timestamp: Supported 00:09:42.119 Copy: Supported 00:09:42.119 Volatile Write Cache: Present 00:09:42.119 Atomic Write Unit (Normal): 1 00:09:42.119 Atomic Write Unit (PFail): 1 00:09:42.119 Atomic Compare & Write Unit: 1 00:09:42.119 Fused Compare & Write: Not Supported 00:09:42.119 Scatter-Gather List 00:09:42.119 SGL Command Set: Supported 00:09:42.119 SGL Keyed: Not Supported 00:09:42.119 SGL Bit Bucket Descriptor: Not Supported 00:09:42.119 SGL Metadata Pointer: Not Supported 00:09:42.119 Oversized SGL: Not Supported 00:09:42.119 SGL Metadata Address: Not Supported 00:09:42.119 SGL Offset: Not Supported 00:09:42.119 Transport SGL Data Block: Not Supported 00:09:42.119 Replay Protected Memory Block: Not Supported 00:09:42.119 00:09:42.119 Firmware Slot Information 00:09:42.119 ========================= 00:09:42.119 Active slot: 1 00:09:42.119 Slot 1 Firmware Revision: 1.0 00:09:42.119 00:09:42.119 00:09:42.119 Commands Supported and Effects 00:09:42.119 ============================== 00:09:42.119 Admin Commands 00:09:42.119 -------------- 00:09:42.119 Delete I/O Submission Queue (00h): Supported 00:09:42.119 Create I/O Submission Queue (01h): Supported 00:09:42.119 Get Log Page (02h): Supported 00:09:42.119 Delete I/O Completion Queue (04h): Supported 00:09:42.119 Create I/O Completion Queue (05h): Supported 00:09:42.119 Identify (06h): Supported 00:09:42.119 Abort (08h): Supported 00:09:42.119 Set Features (09h): Supported 00:09:42.119 Get Features (0Ah): Supported 00:09:42.119 Asynchronous Event Request (0Ch): Supported 00:09:42.119 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.119 Directive Send (19h): Supported 00:09:42.119 Directive Receive (1Ah): Supported 00:09:42.119 Virtualization Management (1Ch): Supported 00:09:42.119 Doorbell Buffer Config (7Ch): Supported 00:09:42.119 Format NVM (80h): Supported LBA-Change 00:09:42.119 I/O Commands 00:09:42.119 ------------ 00:09:42.119 Flush (00h): Supported LBA-Change 00:09:42.119 Write (01h): Supported LBA-Change 00:09:42.119 Read (02h): Supported 00:09:42.119 Compare (05h): Supported 00:09:42.119 Write Zeroes (08h): Supported LBA-Change 00:09:42.119 Dataset Management (09h): Supported LBA-Change 00:09:42.119 Unknown (0Ch): Supported 00:09:42.119 Unknown (12h): Supported 00:09:42.119 Copy (19h):
Supported LBA-Change 00:09:42.119 Unknown (1Dh): Supported LBA-Change 00:09:42.119 00:09:42.119 Error Log 00:09:42.119 ========= 00:09:42.119 00:09:42.119 Arbitration 00:09:42.119 =========== 00:09:42.119 Arbitration Burst: no limit 00:09:42.119 00:09:42.119 Power Management 00:09:42.119 ================ 00:09:42.119 Number of Power States: 1 00:09:42.119 Current Power State: Power State #0 00:09:42.119 Power State #0: 00:09:42.119 Max Power: 25.00 W 00:09:42.119 Non-Operational State: Operational 00:09:42.119 Entry Latency: 16 microseconds 00:09:42.119 Exit Latency: 4 microseconds 00:09:42.119 Relative Read Throughput: 0 00:09:42.119 Relative Read Latency: 0 00:09:42.119 Relative Write Throughput: 0 00:09:42.119 Relative Write Latency: 0 00:09:42.119 Idle Power: Not Reported 00:09:42.119 Active Power: Not Reported 00:09:42.119 Non-Operational Permissive Mode: Not Supported 00:09:42.119 00:09:42.119 Health Information 00:09:42.119 ================== 00:09:42.119 Critical Warnings: 00:09:42.119 Available Spare Space: OK 00:09:42.119 Temperature: OK 00:09:42.119 Device Reliability: OK 00:09:42.119 Read Only: No 00:09:42.119 Volatile Memory Backup: OK 00:09:42.119 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.119 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.119 Available Spare: 0% 00:09:42.119 Available Spare Threshold: 0% 00:09:42.119 Life Percentage Used: 0% 00:09:42.119 Data Units Read: 2043 00:09:42.119 Data Units Written: 945 00:09:42.119 Host Read Commands: 97740 00:09:42.119 Host Write Commands: 48623 00:09:42.119 Controller Busy Time: 0 minutes 00:09:42.119 Power Cycles: 0 00:09:42.119 Power On Hours: 0 hours 00:09:42.119 Unsafe Shutdowns: 0 00:09:42.119 Unrecoverable Media Errors: 0 00:09:42.119 Lifetime Error Log Entries: 0 00:09:42.119 Warning Temperature Time: 0 minutes 00:09:42.119 Critical Temperature Time: 0 minutes 00:09:42.119 00:09:42.119 Number of Queues 00:09:42.119 ================ 00:09:42.119 Number of I/O Submission Queues: 64 00:09:42.119 Number of I/O Completion Queues: 64 00:09:42.119 00:09:42.119 ZNS Specific Controller Data 00:09:42.119 ============================ 00:09:42.119 Zone Append Size Limit: 0 00:09:42.119 00:09:42.119 00:09:42.119 Active Namespaces 00:09:42.119 ================= 00:09:42.119 Namespace ID:1 00:09:42.119 Error Recovery Timeout: Unlimited 00:09:42.119 Command Set Identifier: NVM (00h) 00:09:42.119 Deallocate: Supported 00:09:42.119 Deallocated/Unwritten Error: Supported 00:09:42.119 Deallocated Read Value: All 0x00 00:09:42.119 Deallocate in Write Zeroes: Not Supported 00:09:42.119 Deallocated Guard Field: 0xFFFF 00:09:42.119 Flush: Supported 00:09:42.119 Reservation: Not Supported 00:09:42.119 Metadata Transferred as: Separate Metadata Buffer 00:09:42.119 Namespace Sharing Capabilities: Private 00:09:42.119 Size (in LBAs): 1548666 (5GiB) 00:09:42.119 Capacity (in LBAs): 1548666 (5GiB) 00:09:42.119 Utilization (in LBAs): 1548666 (5GiB) 00:09:42.119 Thin Provisioning: Not Supported 00:09:42.119 Per-NS Atomic Units: No 00:09:42.119 Maximum Single Source Range Length: 128 00:09:42.119 Maximum Copy Length: 128 00:09:42.119 Maximum Source Range Count: 128 00:09:42.119 NGUID/EUI64 Never Reused: No 00:09:42.119 Namespace Write Protected: No 00:09:42.119 Number of LBA Formats: 8 00:09:42.119 Current LBA Format: LBA Format #07 00:09:42.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.119 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.119 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.119 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:09:42.119 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.119 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.119 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.119 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.119 00:09:42.119 14:04:44 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:42.119 14:04:44 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:42.379 ===================================================== 00:09:42.379 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:42.379 ===================================================== 00:09:42.379 Controller Capabilities/Features 00:09:42.379 ================================ 00:09:42.379 Vendor ID: 1b36 00:09:42.379 Subsystem Vendor ID: 1af4 00:09:42.379 Serial Number: 12341 00:09:42.379 Model Number: QEMU NVMe Ctrl 00:09:42.379 Firmware Version: 8.0.0 00:09:42.379 Recommended Arb Burst: 6 00:09:42.379 IEEE OUI Identifier: 00 54 52 00:09:42.379 Multi-path I/O 00:09:42.379 May have multiple subsystem ports: No 00:09:42.379 May have multiple controllers: No 00:09:42.379 Associated with SR-IOV VF: No 00:09:42.379 Max Data Transfer Size: 524288 00:09:42.379 Max Number of Namespaces: 256 00:09:42.379 Max Number of I/O Queues: 64 00:09:42.379 NVMe Specification Version (VS): 1.4 00:09:42.379 NVMe Specification Version (Identify): 1.4 00:09:42.379 Maximum Queue Entries: 2048 00:09:42.379 Contiguous Queues Required: Yes 00:09:42.379 Arbitration Mechanisms Supported 00:09:42.379 Weighted Round Robin: Not Supported 00:09:42.379 Vendor Specific: Not Supported 00:09:42.379 Reset Timeout: 7500 ms 00:09:42.379 Doorbell Stride: 4 bytes 00:09:42.379 NVM Subsystem Reset: Not Supported 00:09:42.379 Command Sets Supported 00:09:42.379 NVM Command Set: Supported 00:09:42.379 Boot Partition: Not Supported 00:09:42.379 Memory Page Size Minimum: 4096 bytes 00:09:42.379 Memory Page Size Maximum: 65536 bytes 00:09:42.379 Persistent Memory Region: Not Supported 00:09:42.379 Optional Asynchronous Events Supported 00:09:42.379 Namespace Attribute Notices: Supported 00:09:42.379 Firmware Activation Notices: Not Supported 00:09:42.379 ANA Change Notices: Not Supported 00:09:42.379 PLE Aggregate Log Change Notices: Not Supported 00:09:42.379 LBA Status Info Alert Notices: Not Supported 00:09:42.379 EGE Aggregate Log Change Notices: Not Supported 00:09:42.379 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.379 Zone Descriptor Change Notices: Not Supported 00:09:42.379 Discovery Log Change Notices: Not Supported 00:09:42.379 Controller Attributes 00:09:42.379 128-bit Host Identifier: Not Supported 00:09:42.379 Non-Operational Permissive Mode: Not Supported 00:09:42.379 NVM Sets: Not Supported 00:09:42.379 Read Recovery Levels: Not Supported 00:09:42.379 Endurance Groups: Not Supported 00:09:42.379 Predictable Latency Mode: Not Supported 00:09:42.379 Traffic Based Keep Alive: Not Supported 00:09:42.379 Namespace Granularity: Not Supported 00:09:42.379 SQ Associations: Not Supported 00:09:42.379 UUID List: Not Supported 00:09:42.379 Multi-Domain Subsystem: Not Supported 00:09:42.379 Fixed Capacity Management: Not Supported 00:09:42.379 Variable Capacity Management: Not Supported 00:09:42.379 Delete Endurance Group: Not Supported 00:09:42.379 Delete NVM Set: Not Supported 00:09:42.379 Extended LBA Formats Supported: Supported 00:09:42.379 Flexible Data Placement Supported: Not Supported
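(Note: when only a handful of fields matter, the same identify invocation echoed just above can be filtered instead of read in full; the binary path and flags below are verbatim from the log, while the field list in the grep pattern is purely illustrative.)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 \
      | grep -E 'Serial Number|Model Number|Subsystem NQN|Current Temperature'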
00:09:42.379 00:09:42.379 Controller Memory Buffer Support 00:09:42.379 ================================ 00:09:42.379 Supported: No 00:09:42.379 00:09:42.379 Persistent Memory Region Support 00:09:42.379 ================================ 00:09:42.379 Supported: No 00:09:42.379 00:09:42.379 Admin Command Set Attributes 00:09:42.379 ============================ 00:09:42.379 Security Send/Receive: Not Supported 00:09:42.379 Format NVM: Supported 00:09:42.379 Firmware Activate/Download: Not Supported 00:09:42.379 Namespace Management: Supported 00:09:42.379 Device Self-Test: Not Supported 00:09:42.379 Directives: Supported 00:09:42.379 NVMe-MI: Not Supported 00:09:42.379 Virtualization Management: Not Supported 00:09:42.379 Doorbell Buffer Config: Supported 00:09:42.379 Get LBA Status Capability: Not Supported 00:09:42.379 Command & Feature Lockdown Capability: Not Supported 00:09:42.379 Abort Command Limit: 4 00:09:42.379 Async Event Request Limit: 4 00:09:42.379 Number of Firmware Slots: N/A 00:09:42.379 Firmware Slot 1 Read-Only: N/A 00:09:42.379 Firmware Activation Without Reset: N/A 00:09:42.379 Multiple Update Detection Support: N/A 00:09:42.379 Firmware Update Granularity: No Information Provided 00:09:42.379 Per-Namespace SMART Log: Yes 00:09:42.379 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.379 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:42.379 Command Effects Log Page: Supported 00:09:42.379 Get Log Page Extended Data: Supported 00:09:42.379 Telemetry Log Pages: Not Supported 00:09:42.379 Persistent Event Log Pages: Not Supported 00:09:42.379 Supported Log Pages Log Page: May Support 00:09:42.379 Commands Supported & Effects Log Page: Not Supported 00:09:42.379 Feature Identifiers & Effects Log Page: May Support 00:09:42.379 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.379 Data Area 4 for Telemetry Log: Not Supported 00:09:42.379 Error Log Page Entries Supported: 1 00:09:42.379 Keep Alive: Not Supported 00:09:42.379 00:09:42.379 NVM Command Set Attributes 00:09:42.379 ========================== 00:09:42.379 Submission Queue Entry Size 00:09:42.379 Max: 64 00:09:42.379 Min: 64 00:09:42.379 Completion Queue Entry Size 00:09:42.379 Max: 16 00:09:42.379 Min: 16 00:09:42.379 Number of Namespaces: 256 00:09:42.379 Compare Command: Supported 00:09:42.379 Write Uncorrectable Command: Not Supported 00:09:42.379 Dataset Management Command: Supported 00:09:42.379 Write Zeroes Command: Supported 00:09:42.379 Set Features Save Field: Supported 00:09:42.379 Reservations: Not Supported 00:09:42.379 Timestamp: Supported 00:09:42.379 Copy: Supported 00:09:42.379 Volatile Write Cache: Present 00:09:42.379 Atomic Write Unit (Normal): 1 00:09:42.379 Atomic Write Unit (PFail): 1 00:09:42.379 Atomic Compare & Write Unit: 1 00:09:42.379 Fused Compare & Write: Not Supported 00:09:42.379 Scatter-Gather List 00:09:42.379 SGL Command Set: Supported 00:09:42.379 SGL Keyed: Not Supported 00:09:42.379 SGL Bit Bucket Descriptor: Not Supported 00:09:42.379 SGL Metadata Pointer: Not Supported 00:09:42.379 Oversized SGL: Not Supported 00:09:42.379 SGL Metadata Address: Not Supported 00:09:42.379 SGL Offset: Not Supported 00:09:42.379 Transport SGL Data Block: Not Supported 00:09:42.379 Replay Protected Memory Block: Not Supported 00:09:42.379 00:09:42.379 Firmware Slot Information 00:09:42.379 ========================= 00:09:42.379 Active slot: 1 00:09:42.379 Slot 1 Firmware Revision: 1.0 00:09:42.379 00:09:42.379 00:09:42.379 Commands Supported and Effects 00:09:42.379
============================== 00:09:42.379 Admin Commands 00:09:42.379 -------------- 00:09:42.379 Delete I/O Submission Queue (00h): Supported 00:09:42.379 Create I/O Submission Queue (01h): Supported 00:09:42.379 Get Log Page (02h): Supported 00:09:42.379 Delete I/O Completion Queue (04h): Supported 00:09:42.379 Create I/O Completion Queue (05h): Supported 00:09:42.379 Identify (06h): Supported 00:09:42.379 Abort (08h): Supported 00:09:42.379 Set Features (09h): Supported 00:09:42.379 Get Features (0Ah): Supported 00:09:42.379 Asynchronous Event Request (0Ch): Supported 00:09:42.379 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.379 Directive Send (19h): Supported 00:09:42.379 Directive Receive (1Ah): Supported 00:09:42.379 Virtualization Management (1Ch): Supported 00:09:42.379 Doorbell Buffer Config (7Ch): Supported 00:09:42.379 Format NVM (80h): Supported LBA-Change 00:09:42.379 I/O Commands 00:09:42.379 ------------ 00:09:42.379 Flush (00h): Supported LBA-Change 00:09:42.379 Write (01h): Supported LBA-Change 00:09:42.379 Read (02h): Supported 00:09:42.379 Compare (05h): Supported 00:09:42.379 Write Zeroes (08h): Supported LBA-Change 00:09:42.379 Dataset Management (09h): Supported LBA-Change 00:09:42.379 Unknown (0Ch): Supported 00:09:42.379 Unknown (12h): Supported 00:09:42.379 Copy (19h): Supported LBA-Change 00:09:42.379 Unknown (1Dh): Supported LBA-Change 00:09:42.379 00:09:42.379 Error Log 00:09:42.379 ========= 00:09:42.379 00:09:42.379 Arbitration 00:09:42.379 =========== 00:09:42.379 Arbitration Burst: no limit 00:09:42.379 00:09:42.379 Power Management 00:09:42.379 ================ 00:09:42.379 Number of Power States: 1 00:09:42.379 Current Power State: Power State #0 00:09:42.379 Power State #0: 00:09:42.379 Max Power: 25.00 W 00:09:42.379 Non-Operational State: Operational 00:09:42.379 Entry Latency: 16 microseconds 00:09:42.379 Exit Latency: 4 microseconds 00:09:42.379 Relative Read Throughput: 0 00:09:42.379 Relative Read Latency: 0 00:09:42.379 Relative Write Throughput: 0 00:09:42.379 Relative Write Latency: 0 00:09:42.379 Idle Power: Not Reported 00:09:42.379 Active Power: Not Reported 00:09:42.379 Non-Operational Permissive Mode: Not Supported 00:09:42.379 00:09:42.379 Health Information 00:09:42.379 ================== 00:09:42.379 Critical Warnings: 00:09:42.379 Available Spare Space: OK 00:09:42.379 Temperature: OK 00:09:42.379 Device Reliability: OK 00:09:42.379 Read Only: No 00:09:42.379 Volatile Memory Backup: OK 00:09:42.379 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.379 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.379 Available Spare: 0% 00:09:42.379 Available Spare Threshold: 0% 00:09:42.379 Life Percentage Used: 0% 00:09:42.379 Data Units Read: 1367 00:09:42.379 Data Units Written: 632 00:09:42.379 Host Read Commands: 62982 00:09:42.379 Host Write Commands: 30963 00:09:42.379 Controller Busy Time: 0 minutes 00:09:42.379 Power Cycles: 0 00:09:42.379 Power On Hours: 0 hours 00:09:42.379 Unsafe Shutdowns: 0 00:09:42.379 Unrecoverable Media Errors: 0 00:09:42.379 Lifetime Error Log Entries: 0 00:09:42.380 Warning Temperature Time: 0 minutes 00:09:42.380 Critical Temperature Time: 0 minutes 00:09:42.380 00:09:42.380 Number of Queues 00:09:42.380 ================ 00:09:42.380 Number of I/O Submission Queues: 64 00:09:42.380 Number of I/O Completion Queues: 64 00:09:42.380 00:09:42.380 ZNS Specific Controller Data 00:09:42.380 ============================ 00:09:42.380 Zone Append Size Limit: 0 00:09:42.380 00:09:42.380 
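(Note: the paired temperature readings in the health pages are a straight Kelvin-to-Celsius conversion, C = K - 273.15, which the tool rounds; a one-line check on the values reported above.)
    awk 'BEGIN { printf "current: %.2f C, threshold: %.2f C\n", 323 - 273.15, 343 - 273.15 }'
    # 49.85 C (printed as 50 Celsius) and 69.85 C (printed as 70 Celsius)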
00:09:42.380 Active Namespaces 00:09:42.380 ================= 00:09:42.380 Namespace ID:1 00:09:42.380 Error Recovery Timeout: Unlimited 00:09:42.380 Command Set Identifier: NVM (00h) 00:09:42.380 Deallocate: Supported 00:09:42.380 Deallocated/Unwritten Error: Supported 00:09:42.380 Deallocated Read Value: All 0x00 00:09:42.380 Deallocate in Write Zeroes: Not Supported 00:09:42.380 Deallocated Guard Field: 0xFFFF 00:09:42.380 Flush: Supported 00:09:42.380 Reservation: Not Supported 00:09:42.380 Namespace Sharing Capabilities: Private 00:09:42.380 Size (in LBAs): 1310720 (5GiB) 00:09:42.380 Capacity (in LBAs): 1310720 (5GiB) 00:09:42.380 Utilization (in LBAs): 1310720 (5GiB) 00:09:42.380 Thin Provisioning: Not Supported 00:09:42.380 Per-NS Atomic Units: No 00:09:42.380 Maximum Single Source Range Length: 128 00:09:42.380 Maximum Copy Length: 128 00:09:42.380 Maximum Source Range Count: 128 00:09:42.380 NGUID/EUI64 Never Reused: No 00:09:42.380 Namespace Write Protected: No 00:09:42.380 Number of LBA Formats: 8 00:09:42.380 Current LBA Format: LBA Format #04 00:09:42.380 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.380 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.380 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.380 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.380 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.380 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.380 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.380 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.380 00:09:42.380 14:04:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:42.380 14:04:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:42.641 ===================================================== 00:09:42.641 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:42.641 ===================================================== 00:09:42.641 Controller Capabilities/Features 00:09:42.641 ================================ 00:09:42.641 Vendor ID: 1b36 00:09:42.641 Subsystem Vendor ID: 1af4 00:09:42.641 Serial Number: 12342 00:09:42.641 Model Number: QEMU NVMe Ctrl 00:09:42.641 Firmware Version: 8.0.0 00:09:42.641 Recommended Arb Burst: 6 00:09:42.641 IEEE OUI Identifier: 00 54 52 00:09:42.641 Multi-path I/O 00:09:42.641 May have multiple subsystem ports: No 00:09:42.641 May have multiple controllers: No 00:09:42.641 Associated with SR-IOV VF: No 00:09:42.641 Max Data Transfer Size: 524288 00:09:42.641 Max Number of Namespaces: 256 00:09:42.641 Max Number of I/O Queues: 64 00:09:42.641 NVMe Specification Version (VS): 1.4 00:09:42.641 NVMe Specification Version (Identify): 1.4 00:09:42.641 Maximum Queue Entries: 2048 00:09:42.641 Contiguous Queues Required: Yes 00:09:42.641 Arbitration Mechanisms Supported 00:09:42.641 Weighted Round Robin: Not Supported 00:09:42.641 Vendor Specific: Not Supported 00:09:42.641 Reset Timeout: 7500 ms 00:09:42.641 Doorbell Stride: 4 bytes 00:09:42.641 NVM Subsystem Reset: Not Supported 00:09:42.641 Command Sets Supported 00:09:42.641 NVM Command Set: Supported 00:09:42.641 Boot Partition: Not Supported 00:09:42.641 Memory Page Size Minimum: 4096 bytes 00:09:42.641 Memory Page Size Maximum: 65536 bytes 00:09:42.641 Persistent Memory Region: Not Supported 00:09:42.641 Optional Asynchronous Events Supported 00:09:42.641 Namespace Attribute Notices: Supported 00:09:42.641 Firmware Activation Notices: Not Supported 00:09:42.641 ANA Change 
Notices: Not Supported 00:09:42.641 PLE Aggregate Log Change Notices: Not Supported 00:09:42.641 LBA Status Info Alert Notices: Not Supported 00:09:42.641 EGE Aggregate Log Change Notices: Not Supported 00:09:42.641 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.641 Zone Descriptor Change Notices: Not Supported 00:09:42.641 Discovery Log Change Notices: Not Supported 00:09:42.641 Controller Attributes 00:09:42.641 128-bit Host Identifier: Not Supported 00:09:42.641 Non-Operational Permissive Mode: Not Supported 00:09:42.641 NVM Sets: Not Supported 00:09:42.641 Read Recovery Levels: Not Supported 00:09:42.641 Endurance Groups: Not Supported 00:09:42.641 Predictable Latency Mode: Not Supported 00:09:42.641 Traffic Based Keep Alive: Not Supported 00:09:42.641 Namespace Granularity: Not Supported 00:09:42.641 SQ Associations: Not Supported 00:09:42.641 UUID List: Not Supported 00:09:42.641 Multi-Domain Subsystem: Not Supported 00:09:42.641 Fixed Capacity Management: Not Supported 00:09:42.641 Variable Capacity Management: Not Supported 00:09:42.641 Delete Endurance Group: Not Supported 00:09:42.641 Delete NVM Set: Not Supported 00:09:42.641 Extended LBA Formats Supported: Supported 00:09:42.641 Flexible Data Placement Supported: Not Supported 00:09:42.641 00:09:42.641 Controller Memory Buffer Support 00:09:42.641 ================================ 00:09:42.641 Supported: No 00:09:42.641 00:09:42.641 Persistent Memory Region Support 00:09:42.641 ================================ 00:09:42.641 Supported: No 00:09:42.641 00:09:42.641 Admin Command Set Attributes 00:09:42.641 ============================ 00:09:42.641 Security Send/Receive: Not Supported 00:09:42.641 Format NVM: Supported 00:09:42.641 Firmware Activate/Download: Not Supported 00:09:42.641 Namespace Management: Supported 00:09:42.641 Device Self-Test: Not Supported 00:09:42.641 Directives: Supported 00:09:42.641 NVMe-MI: Not Supported 00:09:42.641 Virtualization Management: Not Supported 00:09:42.642 Doorbell Buffer Config: Supported 00:09:42.642 Get LBA Status Capability: Not Supported 00:09:42.642 Command & Feature Lockdown Capability: Not Supported 00:09:42.642 Abort Command Limit: 4 00:09:42.642 Async Event Request Limit: 4 00:09:42.642 Number of Firmware Slots: N/A 00:09:42.642 Firmware Slot 1 Read-Only: N/A 00:09:42.642 Firmware Activation Without Reset: N/A 00:09:42.642 Multiple Update Detection Support: N/A 00:09:42.642 Firmware Update Granularity: No Information Provided 00:09:42.642 Per-Namespace SMART Log: Yes 00:09:42.642 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.642 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:42.642 Command Effects Log Page: Supported 00:09:42.642 Get Log Page Extended Data: Supported 00:09:42.642 Telemetry Log Pages: Not Supported 00:09:42.642 Persistent Event Log Pages: Not Supported 00:09:42.642 Supported Log Pages Log Page: May Support 00:09:42.642 Commands Supported & Effects Log Page: Not Supported 00:09:42.642 Feature Identifiers & Effects Log Page: May Support 00:09:42.642 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.642 Data Area 4 for Telemetry Log: Not Supported 00:09:42.642 Error Log Page Entries Supported: 1 00:09:42.642 Keep Alive: Not Supported 00:09:42.642 00:09:42.642 NVM Command Set Attributes 00:09:42.642 ========================== 00:09:42.642 Submission Queue Entry Size 00:09:42.642 Max: 64 00:09:42.642 Min: 64 00:09:42.642 Completion Queue Entry Size 00:09:42.642 Max: 16 00:09:42.642 Min: 16 00:09:42.642 Number of Namespaces: 256
00:09:42.642 Compare Command: Supported 00:09:42.642 Write Uncorrectable Command: Not Supported 00:09:42.642 Dataset Management Command: Supported 00:09:42.642 Write Zeroes Command: Supported 00:09:42.642 Set Features Save Field: Supported 00:09:42.642 Reservations: Not Supported 00:09:42.642 Timestamp: Supported 00:09:42.642 Copy: Supported 00:09:42.642 Volatile Write Cache: Present 00:09:42.642 Atomic Write Unit (Normal): 1 00:09:42.642 Atomic Write Unit (PFail): 1 00:09:42.642 Atomic Compare & Write Unit: 1 00:09:42.642 Fused Compare & Write: Not Supported 00:09:42.642 Scatter-Gather List 00:09:42.642 SGL Command Set: Supported 00:09:42.642 SGL Keyed: Not Supported 00:09:42.642 SGL Bit Bucket Descriptor: Not Supported 00:09:42.642 SGL Metadata Pointer: Not Supported 00:09:42.642 Oversized SGL: Not Supported 00:09:42.642 SGL Metadata Address: Not Supported 00:09:42.642 SGL Offset: Not Supported 00:09:42.642 Transport SGL Data Block: Not Supported 00:09:42.642 Replay Protected Memory Block: Not Supported 00:09:42.642 00:09:42.642 Firmware Slot Information 00:09:42.642 ========================= 00:09:42.642 Active slot: 1 00:09:42.642 Slot 1 Firmware Revision: 1.0 00:09:42.642 00:09:42.642 00:09:42.642 Commands Supported and Effects 00:09:42.642 ============================== 00:09:42.642 Admin Commands 00:09:42.642 -------------- 00:09:42.642 Delete I/O Submission Queue (00h): Supported 00:09:42.642 Create I/O Submission Queue (01h): Supported 00:09:42.642 Get Log Page (02h): Supported 00:09:42.642 Delete I/O Completion Queue (04h): Supported 00:09:42.642 Create I/O Completion Queue (05h): Supported 00:09:42.642 Identify (06h): Supported 00:09:42.642 Abort (08h): Supported 00:09:42.642 Set Features (09h): Supported 00:09:42.642 Get Features (0Ah): Supported 00:09:42.642 Asynchronous Event Request (0Ch): Supported 00:09:42.642 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.642 Directive Send (19h): Supported 00:09:42.642 Directive Receive (1Ah): Supported 00:09:42.642 Virtualization Management (1Ch): Supported 00:09:42.642 Doorbell Buffer Config (7Ch): Supported 00:09:42.642 Format NVM (80h): Supported LBA-Change 00:09:42.642 I/O Commands 00:09:42.642 ------------ 00:09:42.642 Flush (00h): Supported LBA-Change 00:09:42.642 Write (01h): Supported LBA-Change 00:09:42.642 Read (02h): Supported 00:09:42.642 Compare (05h): Supported 00:09:42.642 Write Zeroes (08h): Supported LBA-Change 00:09:42.642 Dataset Management (09h): Supported LBA-Change 00:09:42.642 Unknown (0Ch): Supported 00:09:42.642 Unknown (12h): Supported 00:09:42.642 Copy (19h): Supported LBA-Change 00:09:42.642 Unknown (1Dh): Supported LBA-Change 00:09:42.642 00:09:42.642 Error Log 00:09:42.642 ========= 00:09:42.642 00:09:42.642 Arbitration 00:09:42.642 =========== 00:09:42.642 Arbitration Burst: no limit 00:09:42.642 00:09:42.642 Power Management 00:09:42.642 ================ 00:09:42.642 Number of Power States: 1 00:09:42.642 Current Power State: Power State #0 00:09:42.642 Power State #0: 00:09:42.642 Max Power: 25.00 W 00:09:42.642 Non-Operational State: Operational 00:09:42.642 Entry Latency: 16 microseconds 00:09:42.642 Exit Latency: 4 microseconds 00:09:42.642 Relative Read Throughput: 0 00:09:42.642 Relative Read Latency: 0 00:09:42.642 Relative Write Throughput: 0 00:09:42.642 Relative Write Latency: 0 00:09:42.642 Idle Power: Not Reported 00:09:42.642 Active Power: Not Reported 00:09:42.642 Non-Operational Permissive Mode: Not Supported 00:09:42.642 00:09:42.642 Health Information 00:09:42.642 
================== 00:09:42.642 Critical Warnings: 00:09:42.642 Available Spare Space: OK 00:09:42.642 Temperature: OK 00:09:42.642 Device Reliability: OK 00:09:42.642 Read Only: No 00:09:42.642 Volatile Memory Backup: OK 00:09:42.642 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.642 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.642 Available Spare: 0% 00:09:42.642 Available Spare Threshold: 0% 00:09:42.642 Life Percentage Used: 0% 00:09:42.642 Data Units Read: 4240 00:09:42.642 Data Units Written: 1957 00:09:42.642 Host Read Commands: 190473 00:09:42.642 Host Write Commands: 93441 00:09:42.642 Controller Busy Time: 0 minutes 00:09:42.642 Power Cycles: 0 00:09:42.642 Power On Hours: 0 hours 00:09:42.642 Unsafe Shutdowns: 0 00:09:42.642 Unrecoverable Media Errors: 0 00:09:42.642 Lifetime Error Log Entries: 0 00:09:42.642 Warning Temperature Time: 0 minutes 00:09:42.642 Critical Temperature Time: 0 minutes 00:09:42.642 00:09:42.642 Number of Queues 00:09:42.642 ================ 00:09:42.642 Number of I/O Submission Queues: 64 00:09:42.642 Number of I/O Completion Queues: 64 00:09:42.642 00:09:42.642 ZNS Specific Controller Data 00:09:42.642 ============================ 00:09:42.642 Zone Append Size Limit: 0 00:09:42.642 00:09:42.642 00:09:42.642 Active Namespaces 00:09:42.642 ================= 00:09:42.642 Namespace ID:1 00:09:42.642 Error Recovery Timeout: Unlimited 00:09:42.642 Command Set Identifier: NVM (00h) 00:09:42.642 Deallocate: Supported 00:09:42.642 Deallocated/Unwritten Error: Supported 00:09:42.642 Deallocated Read Value: All 0x00 00:09:42.642 Deallocate in Write Zeroes: Not Supported 00:09:42.642 Deallocated Guard Field: 0xFFFF 00:09:42.642 Flush: Supported 00:09:42.642 Reservation: Not Supported 00:09:42.642 Namespace Sharing Capabilities: Private 00:09:42.642 Size (in LBAs): 1048576 (4GiB) 00:09:42.642 Capacity (in LBAs): 1048576 (4GiB) 00:09:42.642 Utilization (in LBAs): 1048576 (4GiB) 00:09:42.642 Thin Provisioning: Not Supported 00:09:42.642 Per-NS Atomic Units: No 00:09:42.642 Maximum Single Source Range Length: 128 00:09:42.642 Maximum Copy Length: 128 00:09:42.642 Maximum Source Range Count: 128 00:09:42.642 NGUID/EUI64 Never Reused: No 00:09:42.642 Namespace Write Protected: No 00:09:42.642 Number of LBA Formats: 8 00:09:42.642 Current LBA Format: LBA Format #04 00:09:42.642 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.642 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.642 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.642 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.642 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.642 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.642 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.642 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.642 00:09:42.642 Namespace ID:2 00:09:42.642 Error Recovery Timeout: Unlimited 00:09:42.642 Command Set Identifier: NVM (00h) 00:09:42.642 Deallocate: Supported 00:09:42.642 Deallocated/Unwritten Error: Supported 00:09:42.642 Deallocated Read Value: All 0x00 00:09:42.642 Deallocate in Write Zeroes: Not Supported 00:09:42.642 Deallocated Guard Field: 0xFFFF 00:09:42.642 Flush: Supported 00:09:42.642 Reservation: Not Supported 00:09:42.642 Namespace Sharing Capabilities: Private 00:09:42.642 Size (in LBAs): 1048576 (4GiB) 00:09:42.642 Capacity (in LBAs): 1048576 (4GiB) 00:09:42.643 Utilization (in LBAs): 1048576 (4GiB) 00:09:42.643 Thin Provisioning: Not Supported 00:09:42.643 Per-NS Atomic Units: No 
00:09:42.643 Maximum Single Source Range Length: 128 00:09:42.643 Maximum Copy Length: 128 00:09:42.643 Maximum Source Range Count: 128 00:09:42.643 NGUID/EUI64 Never Reused: No 00:09:42.643 Namespace Write Protected: No 00:09:42.643 Number of LBA Formats: 8 00:09:42.643 Current LBA Format: LBA Format #04 00:09:42.643 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.643 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.643 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.643 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.643 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.643 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.643 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.643 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.643 00:09:42.643 Namespace ID:3 00:09:42.643 Error Recovery Timeout: Unlimited 00:09:42.643 Command Set Identifier: NVM (00h) 00:09:42.643 Deallocate: Supported 00:09:42.643 Deallocated/Unwritten Error: Supported 00:09:42.643 Deallocated Read Value: All 0x00 00:09:42.643 Deallocate in Write Zeroes: Not Supported 00:09:42.643 Deallocated Guard Field: 0xFFFF 00:09:42.643 Flush: Supported 00:09:42.643 Reservation: Not Supported 00:09:42.643 Namespace Sharing Capabilities: Private 00:09:42.643 Size (in LBAs): 1048576 (4GiB) 00:09:42.643 Capacity (in LBAs): 1048576 (4GiB) 00:09:42.643 Utilization (in LBAs): 1048576 (4GiB) 00:09:42.643 Thin Provisioning: Not Supported 00:09:42.643 Per-NS Atomic Units: No 00:09:42.643 Maximum Single Source Range Length: 128 00:09:42.643 Maximum Copy Length: 128 00:09:42.643 Maximum Source Range Count: 128 00:09:42.643 NGUID/EUI64 Never Reused: No 00:09:42.643 Namespace Write Protected: No 00:09:42.643 Number of LBA Formats: 8 00:09:42.643 Current LBA Format: LBA Format #04 00:09:42.643 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.643 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.643 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.643 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.643 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.643 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.643 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:42.643 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.643 00:09:42.643 14:04:45 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:42.643 14:04:45 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:42.904 ===================================================== 00:09:42.904 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:42.904 ===================================================== 00:09:42.904 Controller Capabilities/Features 00:09:42.904 ================================ 00:09:42.904 Vendor ID: 1b36 00:09:42.904 Subsystem Vendor ID: 1af4 00:09:42.904 Serial Number: 12343 00:09:42.904 Model Number: QEMU NVMe Ctrl 00:09:42.904 Firmware Version: 8.0.0 00:09:42.904 Recommended Arb Burst: 6 00:09:42.904 IEEE OUI Identifier: 00 54 52 00:09:42.904 Multi-path I/O 00:09:42.904 May have multiple subsystem ports: No 00:09:42.904 May have multiple controllers: Yes 00:09:42.904 Associated with SR-IOV VF: No 00:09:42.904 Max Data Transfer Size: 524288 00:09:42.904 Max Number of Namespaces: 256 00:09:42.904 Max Number of I/O Queues: 64 00:09:42.904 NVMe Specification Version (VS): 1.4 00:09:42.904 NVMe Specification Version (Identify): 1.4 00:09:42.904 Maximum Queue Entries: 2048 
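The nvme.sh@15-16 xtrace lines above show the loop that produces each of these dumps: spdk_nvme_identify is run once per PCIe address in the test's bdfs array (traddr:0000:00:09.0 on this iteration). A minimal reconstruction of that loop, assuming nothing beyond what the trace itself shows:
for bdf in "${bdfs[@]}"; do
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r "trtype:PCIe traddr:${bdf}" -i 0
done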
00:09:42.904 Contiguous Queues Required: Yes 00:09:42.904 Arbitration Mechanisms Supported 00:09:42.904 Weighted Round Robin: Not Supported 00:09:42.904 Vendor Specific: Not Supported 00:09:42.904 Reset Timeout: 7500 ms 00:09:42.904 Doorbell Stride: 4 bytes 00:09:42.904 NVM Subsystem Reset: Not Supported 00:09:42.904 Command Sets Supported 00:09:42.904 NVM Command Set: Supported 00:09:42.904 Boot Partition: Not Supported 00:09:42.904 Memory Page Size Minimum: 4096 bytes 00:09:42.904 Memory Page Size Maximum: 65536 bytes 00:09:42.904 Persistent Memory Region: Not Supported 00:09:42.904 Optional Asynchronous Events Supported 00:09:42.904 Namespace Attribute Notices: Supported 00:09:42.904 Firmware Activation Notices: Not Supported 00:09:42.904 ANA Change Notices: Not Supported 00:09:42.904 PLE Aggregate Log Change Notices: Not Supported 00:09:42.904 LBA Status Info Alert Notices: Not Supported 00:09:42.904 EGE Aggregate Log Change Notices: Not Supported 00:09:42.904 Normal NVM Subsystem Shutdown event: Not Supported 00:09:42.904 Zone Descriptor Change Notices: Not Supported 00:09:42.904 Discovery Log Change Notices: Not Supported 00:09:42.904 Controller Attributes 00:09:42.904 128-bit Host Identifier: Not Supported 00:09:42.905 Non-Operational Permissive Mode: Not Supported 00:09:42.905 NVM Sets: Not Supported 00:09:42.905 Read Recovery Levels: Not Supported 00:09:42.905 Endurance Groups: Supported 00:09:42.905 Predictable Latency Mode: Not Supported 00:09:42.905 Traffic Based Keep Alive: Not Supported 00:09:42.905 Namespace Granularity: Not Supported 00:09:42.905 SQ Associations: Not Supported 00:09:42.905 UUID List: Not Supported 00:09:42.905 Multi-Domain Subsystem: Not Supported 00:09:42.905 Fixed Capacity Management: Not Supported 00:09:42.905 Variable Capacity Management: Not Supported 00:09:42.905 Delete Endurance Group: Not Supported 00:09:42.905 Delete NVM Set: Not Supported 00:09:42.905 Extended LBA Formats Supported: Supported 00:09:42.905 Flexible Data Placement Supported: Supported 00:09:42.905 00:09:42.905 Controller Memory Buffer Support 00:09:42.905 ================================ 00:09:42.905 Supported: No 00:09:42.905 00:09:42.905 Persistent Memory Region Support 00:09:42.905 ================================ 00:09:42.905 Supported: No 00:09:42.905 00:09:42.905 Admin Command Set Attributes 00:09:42.905 ============================ 00:09:42.905 Security Send/Receive: Not Supported 00:09:42.905 Format NVM: Supported 00:09:42.905 Firmware Activate/Download: Not Supported 00:09:42.905 Namespace Management: Supported 00:09:42.905 Device Self-Test: Not Supported 00:09:42.905 Directives: Supported 00:09:42.905 NVMe-MI: Not Supported 00:09:42.905 Virtualization Management: Not Supported 00:09:42.905 Doorbell Buffer Config: Supported 00:09:42.905 Get LBA Status Capability: Not Supported 00:09:42.905 Command & Feature Lockdown Capability: Not Supported 00:09:42.905 Abort Command Limit: 4 00:09:42.905 Async Event Request Limit: 4 00:09:42.905 Number of Firmware Slots: N/A 00:09:42.905 Firmware Slot 1 Read-Only: N/A 00:09:42.905 Firmware Activation Without Reset: N/A 00:09:42.905 Multiple Update Detection Support: N/A 00:09:42.905 Firmware Update Granularity: No Information Provided 00:09:42.905 Per-Namespace SMART Log: Yes 00:09:42.905 Asymmetric Namespace Access Log Page: Not Supported 00:09:42.905 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:42.905 Command Effects Log Page: Supported 00:09:42.905 Get Log Page Extended Data: Supported 00:09:42.905 Telemetry Log Pages: Not
Supported 00:09:42.905 Persistent Event Log Pages: Not Supported 00:09:42.905 Supported Log Pages Log Page: May Support 00:09:42.905 Commands Supported & Effects Log Page: Not Supported 00:09:42.905 Feature Identifiers & Effects Log Page: May Support 00:09:42.905 NVMe-MI Commands & Effects Log Page: May Support 00:09:42.905 Data Area 4 for Telemetry Log: Not Supported 00:09:42.905 Error Log Page Entries Supported: 1 00:09:42.905 Keep Alive: Not Supported 00:09:42.905 00:09:42.905 NVM Command Set Attributes 00:09:42.905 ========================== 00:09:42.905 Submission Queue Entry Size 00:09:42.905 Max: 64 00:09:42.905 Min: 64 00:09:42.905 Completion Queue Entry Size 00:09:42.905 Max: 16 00:09:42.905 Min: 16 00:09:42.905 Number of Namespaces: 256 00:09:42.905 Compare Command: Supported 00:09:42.905 Write Uncorrectable Command: Not Supported 00:09:42.905 Dataset Management Command: Supported 00:09:42.905 Write Zeroes Command: Supported 00:09:42.905 Set Features Save Field: Supported 00:09:42.905 Reservations: Not Supported 00:09:42.905 Timestamp: Supported 00:09:42.905 Copy: Supported 00:09:42.905 Volatile Write Cache: Present 00:09:42.905 Atomic Write Unit (Normal): 1 00:09:42.905 Atomic Write Unit (PFail): 1 00:09:42.905 Atomic Compare & Write Unit: 1 00:09:42.905 Fused Compare & Write: Not Supported 00:09:42.905 Scatter-Gather List 00:09:42.905 SGL Command Set: Supported 00:09:42.905 SGL Keyed: Not Supported 00:09:42.905 SGL Bit Bucket Descriptor: Not Supported 00:09:42.905 SGL Metadata Pointer: Not Supported 00:09:42.905 Oversized SGL: Not Supported 00:09:42.905 SGL Metadata Address: Not Supported 00:09:42.905 SGL Offset: Not Supported 00:09:42.905 Transport SGL Data Block: Not Supported 00:09:42.905 Replay Protected Memory Block: Not Supported 00:09:42.905 00:09:42.905 Firmware Slot Information 00:09:42.905 ========================= 00:09:42.905 Active slot: 1 00:09:42.905 Slot 1 Firmware Revision: 1.0 00:09:42.905 00:09:42.905 00:09:42.905 Commands Supported and Effects 00:09:42.905 ============================== 00:09:42.905 Admin Commands 00:09:42.905 -------------- 00:09:42.905 Delete I/O Submission Queue (00h): Supported 00:09:42.905 Create I/O Submission Queue (01h): Supported 00:09:42.905 Get Log Page (02h): Supported 00:09:42.905 Delete I/O Completion Queue (04h): Supported 00:09:42.905 Create I/O Completion Queue (05h): Supported 00:09:42.905 Identify (06h): Supported 00:09:42.905 Abort (08h): Supported 00:09:42.905 Set Features (09h): Supported 00:09:42.905 Get Features (0Ah): Supported 00:09:42.905 Asynchronous Event Request (0Ch): Supported 00:09:42.905 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:42.905 Directive Send (19h): Supported 00:09:42.905 Directive Receive (1Ah): Supported 00:09:42.905 Virtualization Management (1Ch): Supported 00:09:42.905 Doorbell Buffer Config (7Ch): Supported 00:09:42.905 Format NVM (80h): Supported LBA-Change 00:09:42.905 I/O Commands 00:09:42.905 ------------ 00:09:42.905 Flush (00h): Supported LBA-Change 00:09:42.905 Write (01h): Supported LBA-Change 00:09:42.905 Read (02h): Supported 00:09:42.905 Compare (05h): Supported 00:09:42.905 Write Zeroes (08h): Supported LBA-Change 00:09:42.905 Dataset Management (09h): Supported LBA-Change 00:09:42.905 Unknown (0Ch): Supported 00:09:42.905 Unknown (12h): Supported 00:09:42.905 Copy (19h): Supported LBA-Change 00:09:42.905 Unknown (1Dh): Supported LBA-Change 00:09:42.905 00:09:42.905 Error Log 00:09:42.905 ========= 00:09:42.905 00:09:42.905 Arbitration 00:09:42.905 ===========
00:09:42.905 Arbitration Burst: no limit 00:09:42.905 00:09:42.905 Power Management 00:09:42.905 ================ 00:09:42.905 Number of Power States: 1 00:09:42.905 Current Power State: Power State #0 00:09:42.905 Power State #0: 00:09:42.905 Max Power: 25.00 W 00:09:42.905 Non-Operational State: Operational 00:09:42.905 Entry Latency: 16 microseconds 00:09:42.905 Exit Latency: 4 microseconds 00:09:42.905 Relative Read Throughput: 0 00:09:42.905 Relative Read Latency: 0 00:09:42.905 Relative Write Throughput: 0 00:09:42.905 Relative Write Latency: 0 00:09:42.905 Idle Power: Not Reported 00:09:42.905 Active Power: Not Reported 00:09:42.905 Non-Operational Permissive Mode: Not Supported 00:09:42.905 00:09:42.905 Health Information 00:09:42.905 ================== 00:09:42.905 Critical Warnings: 00:09:42.905 Available Spare Space: OK 00:09:42.905 Temperature: OK 00:09:42.905 Device Reliability: OK 00:09:42.905 Read Only: No 00:09:42.905 Volatile Memory Backup: OK 00:09:42.905 Current Temperature: 323 Kelvin (50 Celsius) 00:09:42.905 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:42.905 Available Spare: 0% 00:09:42.905 Available Spare Threshold: 0% 00:09:42.905 Life Percentage Used: 0% 00:09:42.905 Data Units Read: 1531 00:09:42.905 Data Units Written: 716 00:09:42.905 Host Read Commands: 64304 00:09:42.905 Host Write Commands: 31639 00:09:42.905 Controller Busy Time: 0 minutes 00:09:42.905 Power Cycles: 0 00:09:42.905 Power On Hours: 0 hours 00:09:42.905 Unsafe Shutdowns: 0 00:09:42.905 Unrecoverable Media Errors: 0 00:09:42.905 Lifetime Error Log Entries: 0 00:09:42.905 Warning Temperature Time: 0 minutes 00:09:42.905 Critical Temperature Time: 0 minutes 00:09:42.905 00:09:42.905 Number of Queues 00:09:42.905 ================ 00:09:42.905 Number of I/O Submission Queues: 64 00:09:42.905 Number of I/O Completion Queues: 64 00:09:42.905 00:09:42.905 ZNS Specific Controller Data 00:09:42.905 ============================ 00:09:42.905 Zone Append Size Limit: 0 00:09:42.905 00:09:42.905 00:09:42.905 Active Namespaces 00:09:42.905 ================= 00:09:42.905 Namespace ID:1 00:09:42.905 Error Recovery Timeout: Unlimited 00:09:42.905 Command Set Identifier: NVM (00h) 00:09:42.905 Deallocate: Supported 00:09:42.905 Deallocated/Unwritten Error: Supported 00:09:42.905 Deallocated Read Value: All 0x00 00:09:42.905 Deallocate in Write Zeroes: Not Supported 00:09:42.905 Deallocated Guard Field: 0xFFFF 00:09:42.905 Flush: Supported 00:09:42.905 Reservation: Not Supported 00:09:42.905 Namespace Sharing Capabilities: Multiple Controllers 00:09:42.905 Size (in LBAs): 262144 (1GiB) 00:09:42.905 Capacity (in LBAs): 262144 (1GiB) 00:09:42.905 Utilization (in LBAs): 262144 (1GiB) 00:09:42.905 Thin Provisioning: Not Supported 00:09:42.906 Per-NS Atomic Units: No 00:09:42.906 Maximum Single Source Range Length: 128 00:09:42.906 Maximum Copy Length: 128 00:09:42.906 Maximum Source Range Count: 128 00:09:42.906 NGUID/EUI64 Never Reused: No 00:09:42.906 Namespace Write Protected: No 00:09:42.906 Endurance group ID: 1 00:09:42.906 Number of LBA Formats: 8 00:09:42.906 Current LBA Format: LBA Format #04 00:09:42.906 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:42.906 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:42.906 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:42.906 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:42.906 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:42.906 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:42.906 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:42.906 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:42.906 00:09:42.906 Get Feature FDP: 00:09:42.906 ================ 00:09:42.906 Enabled: Yes 00:09:42.906 FDP configuration index: 0 00:09:42.906 00:09:42.906 FDP configurations log page 00:09:42.906 =========================== 00:09:42.906 Number of FDP configurations: 1 00:09:42.906 Version: 0 00:09:42.906 Size: 112 00:09:42.906 FDP Configuration Descriptor: 0 00:09:42.906 Descriptor Size: 96 00:09:42.906 Reclaim Group Identifier format: 2 00:09:42.906 FDP Volatile Write Cache: Not Present 00:09:42.906 FDP Configuration: Valid 00:09:42.906 Vendor Specific Size: 0 00:09:42.906 Number of Reclaim Groups: 2 00:09:42.906 Number of Reclaim Unit Handles: 8 00:09:42.906 Max Placement Identifiers: 128 00:09:42.906 Number of Namespaces Supported: 256 00:09:42.906 Reclaim Unit Nominal Size: 6000000 bytes 00:09:42.906 Estimated Reclaim Unit Time Limit: Not Reported 00:09:42.906 RUH Desc #000: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #001: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #002: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #003: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #004: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #005: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #006: RUH Type: Initially Isolated 00:09:42.906 RUH Desc #007: RUH Type: Initially Isolated 00:09:42.906 00:09:42.906 FDP reclaim unit handle usage log page 00:09:42.906 ====================================== 00:09:42.906 Number of Reclaim Unit Handles: 8 00:09:42.906 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:42.906 RUH Usage Desc #001: RUH Attributes: Unused 00:09:42.906 RUH Usage Desc #002: RUH Attributes: Unused 00:09:42.906 RUH Usage Desc #003: RUH Attributes: Unused 00:09:42.906 RUH Usage Desc #004: RUH Attributes: Unused 00:09:42.906 RUH Usage Desc #005: RUH Attributes: Unused 00:09:42.906 RUH Usage Desc #006: RUH Attributes: Unused 00:09:42.906 RUH Usage Desc #007: RUH Attributes: Unused 00:09:42.906 00:09:42.906 FDP statistics log page 00:09:42.906 ======================= 00:09:42.906 Host bytes with metadata written: 474501120 00:09:42.906 Media bytes with metadata written: 474587136 00:09:42.906 Media bytes erased: 0 00:09:42.906 00:09:42.906 FDP events log page 00:09:42.906 =================== 00:09:42.906 Number of FDP events: 0 00:09:42.906 00:09:42.906 00:09:42.906 real 0m1.270s 00:09:42.906 user 0m0.384s 00:09:42.906 sys 0m0.650s 00:09:42.906 14:04:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:42.906 ************************************ 00:09:42.906 14:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:42.906 END TEST nvme_identify 00:09:42.906 ************************************ 00:09:42.906 14:04:45 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:42.906 14:04:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:42.906 14:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.906 14:04:45 -- common/autotest_common.sh@10 -- # set +x 00:09:42.906 ************************************ 00:09:42.906 START TEST nvme_perf 00:09:42.906 ************************************ 00:09:42.906 14:04:45 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:42.906 14:04:45 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:44.293 Initializing NVMe Controllers 00:09:44.293 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:44.293
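The nvme_perf test that starts above invokes spdk_nvme_perf with the flags captured in the nvme.sh@22 trace. Repeated here with our reading of each flag (the glosses are interpretation, not something the log states):
# -q 128   queue depth per namespace
# -w read  100% read workload
# -o 12288 I/O size in bytes (12 KiB)
# -t 1     run time in seconds
# -LL      latency tracking; produces the summary tables and histograms below
# -i 0, -N passed through exactly as logged
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N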
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:44.293 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:44.293 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:44.293 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:44.293 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:44.293 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:44.293 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:44.293 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:44.293 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:44.293 Initialization complete. Launching workers. 00:09:44.293 ======================================================== 00:09:44.293 Latency(us) 00:09:44.293 Device Information : IOPS MiB/s Average min max 00:09:44.293 PCIE (0000:00:06.0) NSID 1 from core 0: 11347.72 132.98 11270.84 5412.91 38077.19 00:09:44.293 PCIE (0000:00:07.0) NSID 1 from core 0: 11347.72 132.98 11255.76 5573.84 37955.47 00:09:44.293 PCIE (0000:00:09.0) NSID 1 from core 0: 11347.72 132.98 11238.21 5542.61 37849.63 00:09:44.293 PCIE (0000:00:08.0) NSID 1 from core 0: 11347.72 132.98 11220.84 5568.28 36742.91 00:09:44.293 PCIE (0000:00:08.0) NSID 2 from core 0: 11347.72 132.98 11203.43 5604.05 35697.03 00:09:44.293 PCIE (0000:00:08.0) NSID 3 from core 0: 11347.72 132.98 11186.41 5572.79 35258.23 00:09:44.293 ======================================================== 00:09:44.293 Total : 68086.33 797.89 11229.25 5412.91 38077.19 00:09:44.293 00:09:44.293 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:44.293 ================================================================================= 00:09:44.293 1.00000% : 5545.354us 00:09:44.293 10.00000% : 6326.745us 00:09:44.293 25.00000% : 7561.846us 00:09:44.293 50.00000% : 11342.769us 00:09:44.293 75.00000% : 13812.972us 00:09:44.293 90.00000% : 15325.342us 00:09:44.293 95.00000% : 16434.412us 00:09:44.293 98.00000% : 17845.957us 00:09:44.293 99.00000% : 32667.175us 00:09:44.293 99.50000% : 35490.265us 00:09:44.293 99.90000% : 37708.406us 00:09:44.293 99.99000% : 38111.705us 00:09:44.293 99.99900% : 38111.705us 00:09:44.293 99.99990% : 38111.705us 00:09:44.293 99.99999% : 38111.705us 00:09:44.293 00:09:44.293 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:44.293 ================================================================================= 00:09:44.293 1.00000% : 5696.591us 00:09:44.293 10.00000% : 6377.157us 00:09:44.293 25.00000% : 7461.022us 00:09:44.293 50.00000% : 11292.357us 00:09:44.293 75.00000% : 13812.972us 00:09:44.293 90.00000% : 15123.692us 00:09:44.293 95.00000% : 16031.114us 00:09:44.293 98.00000% : 18249.255us 00:09:44.293 99.00000% : 33070.474us 00:09:44.293 99.50000% : 35490.265us 00:09:44.293 99.90000% : 37506.757us 00:09:44.293 99.99000% : 38111.705us 00:09:44.293 99.99900% : 38111.705us 00:09:44.293 99.99990% : 38111.705us 00:09:44.293 99.99999% : 38111.705us 00:09:44.293 00:09:44.293 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:44.293 ================================================================================= 00:09:44.293 1.00000% : 5696.591us 00:09:44.293 10.00000% : 6351.951us 00:09:44.293 25.00000% : 7410.609us 00:09:44.293 50.00000% : 11241.945us 00:09:44.293 75.00000% : 13712.148us 00:09:44.293 90.00000% : 15123.692us 00:09:44.293 95.00000% : 16232.763us 00:09:44.293 98.00000% : 18450.905us 00:09:44.293 99.00000% : 34280.369us 00:09:44.293 99.50000% : 36095.212us 00:09:44.293 99.90000% : 
37506.757us 00:09:44.293 99.99000% : 37910.055us 00:09:44.293 99.99900% : 37910.055us 00:09:44.293 99.99990% : 37910.055us 00:09:44.293 99.99999% : 37910.055us 00:09:44.293 00:09:44.293 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:44.293 ================================================================================= 00:09:44.293 1.00000% : 5696.591us 00:09:44.293 10.00000% : 6377.157us 00:09:44.293 25.00000% : 7410.609us 00:09:44.293 50.00000% : 11191.532us 00:09:44.293 75.00000% : 13812.972us 00:09:44.293 90.00000% : 15123.692us 00:09:44.293 95.00000% : 16131.938us 00:09:44.293 98.00000% : 17946.782us 00:09:44.293 99.00000% : 33070.474us 00:09:44.293 99.50000% : 34885.317us 00:09:44.293 99.90000% : 36498.511us 00:09:44.293 99.99000% : 36901.809us 00:09:44.293 99.99900% : 36901.809us 00:09:44.293 99.99990% : 36901.809us 00:09:44.293 99.99999% : 36901.809us 00:09:44.293 00:09:44.293 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:44.293 ================================================================================= 00:09:44.293 1.00000% : 5671.385us 00:09:44.293 10.00000% : 6377.157us 00:09:44.293 25.00000% : 7410.609us 00:09:44.293 50.00000% : 11191.532us 00:09:44.293 75.00000% : 13812.972us 00:09:44.293 90.00000% : 15325.342us 00:09:44.293 95.00000% : 16232.763us 00:09:44.293 98.00000% : 17745.132us 00:09:44.293 99.00000% : 32062.228us 00:09:44.293 99.50000% : 33877.071us 00:09:44.293 99.90000% : 35490.265us 00:09:44.293 99.99000% : 35691.914us 00:09:44.293 99.99900% : 35893.563us 00:09:44.293 99.99990% : 35893.563us 00:09:44.293 99.99999% : 35893.563us 00:09:44.293 00:09:44.293 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:44.293 ================================================================================= 00:09:44.293 1.00000% : 5671.385us 00:09:44.293 10.00000% : 6351.951us 00:09:44.293 25.00000% : 7410.609us 00:09:44.293 50.00000% : 11141.120us 00:09:44.293 75.00000% : 13712.148us 00:09:44.293 90.00000% : 15123.692us 00:09:44.293 95.00000% : 16434.412us 00:09:44.293 98.00000% : 17745.132us 00:09:44.293 99.00000% : 31658.929us 00:09:44.293 99.50000% : 33473.772us 00:09:44.293 99.90000% : 35086.966us 00:09:44.293 99.99000% : 35288.615us 00:09:44.293 99.99900% : 35288.615us 00:09:44.293 99.99990% : 35288.615us 00:09:44.293 99.99999% : 35288.615us 00:09:44.293 00:09:44.293 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:44.293 ============================================================================== 00:09:44.293 Range in us Cumulative IO count 00:09:44.293 5394.117 - 5419.323: 0.0263% ( 3) 00:09:44.293 5419.323 - 5444.529: 0.1404% ( 13) 00:09:44.293 5444.529 - 5469.735: 0.3775% ( 27) 00:09:44.293 5469.735 - 5494.942: 0.6496% ( 31) 00:09:44.293 5494.942 - 5520.148: 0.8690% ( 25) 00:09:44.293 5520.148 - 5545.354: 1.2202% ( 40) 00:09:44.293 5545.354 - 5570.560: 1.5362% ( 36) 00:09:44.293 5570.560 - 5595.766: 1.7995% ( 30) 00:09:44.293 5595.766 - 5620.972: 2.1067% ( 35) 00:09:44.293 5620.972 - 5646.178: 2.3701% ( 30) 00:09:44.293 5646.178 - 5671.385: 2.6246% ( 29) 00:09:44.293 5671.385 - 5696.591: 2.9670% ( 39) 00:09:44.293 5696.591 - 5721.797: 3.2040% ( 27) 00:09:44.293 5721.797 - 5747.003: 3.4673% ( 30) 00:09:44.293 5747.003 - 5772.209: 3.7921% ( 37) 00:09:44.293 5772.209 - 5797.415: 4.0291% ( 27) 00:09:44.293 5797.415 - 5822.622: 4.3364% ( 35) 00:09:44.293 5822.622 - 5847.828: 4.6261% ( 33) 00:09:44.293 5847.828 - 5873.034: 4.8455% ( 25) 00:09:44.293 5873.034 - 5898.240: 5.1879% ( 39) 
00:09:44.293 5898.240 - 5923.446: 5.4336% ( 28) 00:09:44.293 5923.446 - 5948.652: 5.7496% ( 36) 00:09:44.293 5948.652 - 5973.858: 6.0042% ( 29) 00:09:44.293 5973.858 - 5999.065: 6.3466% ( 39) 00:09:44.293 5999.065 - 6024.271: 6.6626% ( 36) 00:09:44.293 6024.271 - 6049.477: 6.9347% ( 31) 00:09:44.293 6049.477 - 6074.683: 7.2858% ( 40) 00:09:44.293 6074.683 - 6099.889: 7.5228% ( 27) 00:09:44.293 6099.889 - 6125.095: 7.8125% ( 33) 00:09:44.293 6125.095 - 6150.302: 8.1285% ( 36) 00:09:44.293 6150.302 - 6175.508: 8.4094% ( 32) 00:09:44.293 6175.508 - 6200.714: 8.7254% ( 36) 00:09:44.293 6200.714 - 6225.920: 8.9800% ( 29) 00:09:44.293 6225.920 - 6251.126: 9.3136% ( 38) 00:09:44.293 6251.126 - 6276.332: 9.5769% ( 30) 00:09:44.293 6276.332 - 6301.538: 9.9017% ( 37) 00:09:44.293 6301.538 - 6326.745: 10.1562% ( 29) 00:09:44.293 6326.745 - 6351.951: 10.4459% ( 33) 00:09:44.293 6351.951 - 6377.157: 10.7883% ( 39) 00:09:44.293 6377.157 - 6402.363: 11.0779% ( 33) 00:09:44.293 6402.363 - 6427.569: 11.3764% ( 34) 00:09:44.293 6427.569 - 6452.775: 11.6397% ( 30) 00:09:44.293 6452.775 - 6503.188: 12.2542% ( 70) 00:09:44.293 6503.188 - 6553.600: 12.8950% ( 73) 00:09:44.293 6553.600 - 6604.012: 13.5270% ( 72) 00:09:44.293 6604.012 - 6654.425: 14.1327% ( 69) 00:09:44.293 6654.425 - 6704.837: 14.7384% ( 69) 00:09:44.293 6704.837 - 6755.249: 15.3353% ( 68) 00:09:44.293 6755.249 - 6805.662: 15.9498% ( 70) 00:09:44.293 6805.662 - 6856.074: 16.5643% ( 70) 00:09:44.294 6856.074 - 6906.486: 17.2051% ( 73) 00:09:44.294 6906.486 - 6956.898: 17.7932% ( 67) 00:09:44.294 6956.898 - 7007.311: 18.4603% ( 76) 00:09:44.294 7007.311 - 7057.723: 19.0572% ( 68) 00:09:44.294 7057.723 - 7108.135: 19.6541% ( 68) 00:09:44.294 7108.135 - 7158.548: 20.2423% ( 67) 00:09:44.294 7158.548 - 7208.960: 20.8743% ( 72) 00:09:44.294 7208.960 - 7259.372: 21.5063% ( 72) 00:09:44.294 7259.372 - 7309.785: 22.1383% ( 72) 00:09:44.294 7309.785 - 7360.197: 22.7353% ( 68) 00:09:44.294 7360.197 - 7410.609: 23.3585% ( 71) 00:09:44.294 7410.609 - 7461.022: 23.9554% ( 68) 00:09:44.294 7461.022 - 7511.434: 24.5874% ( 72) 00:09:44.294 7511.434 - 7561.846: 25.2019% ( 70) 00:09:44.294 7561.846 - 7612.258: 25.6496% ( 51) 00:09:44.294 7612.258 - 7662.671: 25.8690% ( 25) 00:09:44.294 7662.671 - 7713.083: 26.0183% ( 17) 00:09:44.294 7713.083 - 7763.495: 26.1060% ( 10) 00:09:44.294 7763.495 - 7813.908: 26.1938% ( 10) 00:09:44.294 7813.908 - 7864.320: 26.2904% ( 11) 00:09:44.294 7864.320 - 7914.732: 26.4045% ( 13) 00:09:44.294 7914.732 - 7965.145: 26.4835% ( 9) 00:09:44.294 7965.145 - 8015.557: 26.5801% ( 11) 00:09:44.294 8015.557 - 8065.969: 26.6854% ( 12) 00:09:44.294 8065.969 - 8116.382: 26.7732% ( 10) 00:09:44.294 8116.382 - 8166.794: 26.8873% ( 13) 00:09:44.294 8166.794 - 8217.206: 26.9838% ( 11) 00:09:44.294 8217.206 - 8267.618: 27.0716% ( 10) 00:09:44.294 8267.618 - 8318.031: 27.1770% ( 12) 00:09:44.294 8318.031 - 8368.443: 27.2472% ( 8) 00:09:44.294 8368.443 - 8418.855: 27.3438% ( 11) 00:09:44.294 8418.855 - 8469.268: 27.4666% ( 14) 00:09:44.294 8469.268 - 8519.680: 27.5544% ( 10) 00:09:44.294 8519.680 - 8570.092: 27.7563% ( 23) 00:09:44.294 8570.092 - 8620.505: 27.8529% ( 11) 00:09:44.294 8620.505 - 8670.917: 27.9407% ( 10) 00:09:44.294 8670.917 - 8721.329: 28.0636% ( 14) 00:09:44.294 8721.329 - 8771.742: 28.2128% ( 17) 00:09:44.294 8771.742 - 8822.154: 28.3269% ( 13) 00:09:44.294 8822.154 - 8872.566: 28.5639% ( 27) 00:09:44.294 8872.566 - 8922.978: 28.7131% ( 17) 00:09:44.294 8922.978 - 8973.391: 28.9326% ( 25) 00:09:44.294 8973.391 - 9023.803: 29.1081% ( 
20) 00:09:44.294 9023.803 - 9074.215: 29.2574% ( 17) 00:09:44.294 9074.215 - 9124.628: 29.4593% ( 23) 00:09:44.294 9124.628 - 9175.040: 29.6348% ( 20) 00:09:44.294 9175.040 - 9225.452: 29.9070% ( 31) 00:09:44.294 9225.452 - 9275.865: 30.1703% ( 30) 00:09:44.294 9275.865 - 9326.277: 30.4424% ( 31) 00:09:44.294 9326.277 - 9376.689: 30.8199% ( 43) 00:09:44.294 9376.689 - 9427.102: 31.1359% ( 36) 00:09:44.294 9427.102 - 9477.514: 31.4519% ( 36) 00:09:44.294 9477.514 - 9527.926: 31.8294% ( 43) 00:09:44.294 9527.926 - 9578.338: 32.2595% ( 49) 00:09:44.294 9578.338 - 9628.751: 32.6545% ( 45) 00:09:44.294 9628.751 - 9679.163: 32.9617% ( 35) 00:09:44.294 9679.163 - 9729.575: 33.4796% ( 59) 00:09:44.294 9729.575 - 9779.988: 33.9624% ( 55) 00:09:44.294 9779.988 - 9830.400: 34.3662% ( 46) 00:09:44.294 9830.400 - 9880.812: 34.8402% ( 54) 00:09:44.294 9880.812 - 9931.225: 35.2967% ( 52) 00:09:44.294 9931.225 - 9981.637: 35.9024% ( 69) 00:09:44.294 9981.637 - 10032.049: 36.4466% ( 62) 00:09:44.294 10032.049 - 10082.462: 36.9031% ( 52) 00:09:44.294 10082.462 - 10132.874: 37.4034% ( 57) 00:09:44.294 10132.874 - 10183.286: 37.9301% ( 60) 00:09:44.294 10183.286 - 10233.698: 38.4919% ( 64) 00:09:44.294 10233.698 - 10284.111: 39.0011% ( 58) 00:09:44.294 10284.111 - 10334.523: 39.5277% ( 60) 00:09:44.294 10334.523 - 10384.935: 40.0281% ( 57) 00:09:44.294 10384.935 - 10435.348: 40.6338% ( 69) 00:09:44.294 10435.348 - 10485.760: 41.3185% ( 78) 00:09:44.294 10485.760 - 10536.172: 41.8539% ( 61) 00:09:44.294 10536.172 - 10586.585: 42.3367% ( 55) 00:09:44.294 10586.585 - 10636.997: 42.8195% ( 55) 00:09:44.294 10636.997 - 10687.409: 43.3989% ( 66) 00:09:44.294 10687.409 - 10737.822: 43.9607% ( 64) 00:09:44.294 10737.822 - 10788.234: 44.5225% ( 64) 00:09:44.294 10788.234 - 10838.646: 45.0228% ( 57) 00:09:44.294 10838.646 - 10889.058: 45.5671% ( 62) 00:09:44.294 10889.058 - 10939.471: 46.1640% ( 68) 00:09:44.294 10939.471 - 10989.883: 46.6204% ( 52) 00:09:44.294 10989.883 - 11040.295: 47.2437% ( 71) 00:09:44.294 11040.295 - 11090.708: 47.7089% ( 53) 00:09:44.294 11090.708 - 11141.120: 48.0952% ( 44) 00:09:44.294 11141.120 - 11191.532: 48.6657% ( 65) 00:09:44.294 11191.532 - 11241.945: 49.2539% ( 67) 00:09:44.294 11241.945 - 11292.357: 49.7981% ( 62) 00:09:44.294 11292.357 - 11342.769: 50.3950% ( 68) 00:09:44.294 11342.769 - 11393.182: 50.9393% ( 62) 00:09:44.294 11393.182 - 11443.594: 51.4133% ( 54) 00:09:44.294 11443.594 - 11494.006: 51.9751% ( 64) 00:09:44.294 11494.006 - 11544.418: 52.6334% ( 75) 00:09:44.294 11544.418 - 11594.831: 52.9758% ( 39) 00:09:44.294 11594.831 - 11645.243: 53.4673% ( 56) 00:09:44.294 11645.243 - 11695.655: 54.0643% ( 68) 00:09:44.294 11695.655 - 11746.068: 54.5032% ( 50) 00:09:44.294 11746.068 - 11796.480: 55.0035% ( 57) 00:09:44.294 11796.480 - 11846.892: 55.6794% ( 77) 00:09:44.294 11846.892 - 11897.305: 56.4870% ( 92) 00:09:44.294 11897.305 - 11947.717: 57.0576% ( 65) 00:09:44.294 11947.717 - 11998.129: 57.5579% ( 57) 00:09:44.294 11998.129 - 12048.542: 58.0671% ( 58) 00:09:44.294 12048.542 - 12098.954: 58.5060% ( 50) 00:09:44.294 12098.954 - 12149.366: 59.0678% ( 64) 00:09:44.294 12149.366 - 12199.778: 59.4013% ( 38) 00:09:44.294 12199.778 - 12250.191: 59.7963% ( 45) 00:09:44.294 12250.191 - 12300.603: 60.2001% ( 46) 00:09:44.294 12300.603 - 12351.015: 60.7444% ( 62) 00:09:44.294 12351.015 - 12401.428: 61.3237% ( 66) 00:09:44.294 12401.428 - 12451.840: 61.7451% ( 48) 00:09:44.294 12451.840 - 12502.252: 62.2454% ( 57) 00:09:44.294 12502.252 - 12552.665: 62.6843% ( 50) 00:09:44.294 
12552.665 - 12603.077: 63.2374% ( 63) 00:09:44.294 12603.077 - 12653.489: 63.6763% ( 50) 00:09:44.294 12653.489 - 12703.902: 64.1503% ( 54) 00:09:44.294 12703.902 - 12754.314: 64.7999% ( 74) 00:09:44.294 12754.314 - 12804.726: 65.2475% ( 51) 00:09:44.294 12804.726 - 12855.138: 65.9498% ( 80) 00:09:44.294 12855.138 - 12905.551: 66.3887% ( 50) 00:09:44.294 12905.551 - 13006.375: 67.2753% ( 101) 00:09:44.294 13006.375 - 13107.200: 68.3199% ( 119) 00:09:44.294 13107.200 - 13208.025: 69.3206% ( 114) 00:09:44.294 13208.025 - 13308.849: 70.5320% ( 138) 00:09:44.294 13308.849 - 13409.674: 71.5765% ( 119) 00:09:44.294 13409.674 - 13510.498: 72.5860% ( 115) 00:09:44.294 13510.498 - 13611.323: 73.6218% ( 118) 00:09:44.294 13611.323 - 13712.148: 74.5348% ( 104) 00:09:44.294 13712.148 - 13812.972: 75.8515% ( 150) 00:09:44.294 13812.972 - 13913.797: 76.8785% ( 117) 00:09:44.294 13913.797 - 14014.622: 77.8704% ( 113) 00:09:44.294 14014.622 - 14115.446: 78.7834% ( 104) 00:09:44.294 14115.446 - 14216.271: 79.9070% ( 128) 00:09:44.294 14216.271 - 14317.095: 81.0305% ( 128) 00:09:44.294 14317.095 - 14417.920: 81.8908% ( 98) 00:09:44.294 14417.920 - 14518.745: 83.0846% ( 136) 00:09:44.294 14518.745 - 14619.569: 84.0853% ( 114) 00:09:44.294 14619.569 - 14720.394: 85.0158% ( 106) 00:09:44.294 14720.394 - 14821.218: 85.9199% ( 103) 00:09:44.294 14821.218 - 14922.043: 86.8329% ( 104) 00:09:44.294 14922.043 - 15022.868: 87.7019% ( 99) 00:09:44.294 15022.868 - 15123.692: 88.6499% ( 108) 00:09:44.294 15123.692 - 15224.517: 89.3785% ( 83) 00:09:44.294 15224.517 - 15325.342: 90.1071% ( 83) 00:09:44.294 15325.342 - 15426.166: 90.8971% ( 90) 00:09:44.294 15426.166 - 15526.991: 91.6608% ( 87) 00:09:44.294 15526.991 - 15627.815: 92.1875% ( 60) 00:09:44.294 15627.815 - 15728.640: 92.6791% ( 56) 00:09:44.294 15728.640 - 15829.465: 93.0390% ( 41) 00:09:44.294 15829.465 - 15930.289: 93.4779% ( 50) 00:09:44.294 15930.289 - 16031.114: 93.8641% ( 44) 00:09:44.294 16031.114 - 16131.938: 94.2591% ( 45) 00:09:44.294 16131.938 - 16232.763: 94.6454% ( 44) 00:09:44.294 16232.763 - 16333.588: 94.9526% ( 35) 00:09:44.294 16333.588 - 16434.412: 95.1984% ( 28) 00:09:44.294 16434.412 - 16535.237: 95.4881% ( 33) 00:09:44.294 16535.237 - 16636.062: 95.7338% ( 28) 00:09:44.294 16636.062 - 16736.886: 96.0060% ( 31) 00:09:44.294 16736.886 - 16837.711: 96.2342% ( 26) 00:09:44.294 16837.711 - 16938.535: 96.4537% ( 25) 00:09:44.294 16938.535 - 17039.360: 96.7346% ( 32) 00:09:44.294 17039.360 - 17140.185: 96.9716% ( 27) 00:09:44.294 17140.185 - 17241.009: 97.1822% ( 24) 00:09:44.294 17241.009 - 17341.834: 97.3666% ( 21) 00:09:44.294 17341.834 - 17442.658: 97.5246% ( 18) 00:09:44.294 17442.658 - 17543.483: 97.6650% ( 16) 00:09:44.294 17543.483 - 17644.308: 97.7879% ( 14) 00:09:44.294 17644.308 - 17745.132: 97.9723% ( 21) 00:09:44.294 17745.132 - 17845.957: 98.0864% ( 13) 00:09:44.294 17845.957 - 17946.782: 98.2005% ( 13) 00:09:44.294 17946.782 - 18047.606: 98.2971% ( 11) 00:09:44.294 18047.606 - 18148.431: 98.4638% ( 19) 00:09:44.294 18148.431 - 18249.255: 98.5692% ( 12) 00:09:44.294 18249.255 - 18350.080: 98.6745% ( 12) 00:09:44.294 18350.080 - 18450.905: 98.7798% ( 12) 00:09:44.294 18450.905 - 18551.729: 98.8501% ( 8) 00:09:44.294 18551.729 - 18652.554: 98.8764% ( 3) 00:09:44.294 31860.578 - 32062.228: 98.8852% ( 1) 00:09:44.294 32062.228 - 32263.877: 98.9291% ( 5) 00:09:44.294 32263.877 - 32465.526: 98.9642% ( 4) 00:09:44.294 32465.526 - 32667.175: 99.0081% ( 5) 00:09:44.294 32667.175 - 32868.825: 99.0344% ( 3) 00:09:44.294 32868.825 - 33070.474: 
99.0695% ( 4) 00:09:44.294 33070.474 - 33272.123: 99.1222% ( 6) 00:09:44.294 33272.123 - 33473.772: 99.1661% ( 5) 00:09:44.294 33473.772 - 33675.422: 99.1924% ( 3) 00:09:44.294 33675.422 - 33877.071: 99.2275% ( 4) 00:09:44.294 33877.071 - 34078.720: 99.2539% ( 3) 00:09:44.295 34078.720 - 34280.369: 99.2890% ( 4) 00:09:44.295 34280.369 - 34482.018: 99.3416% ( 6) 00:09:44.295 34482.018 - 34683.668: 99.3680% ( 3) 00:09:44.295 34683.668 - 34885.317: 99.4031% ( 4) 00:09:44.295 34885.317 - 35086.966: 99.4558% ( 6) 00:09:44.295 35086.966 - 35288.615: 99.4821% ( 3) 00:09:44.295 35288.615 - 35490.265: 99.5172% ( 4) 00:09:44.295 35490.265 - 35691.914: 99.5523% ( 4) 00:09:44.295 35691.914 - 35893.563: 99.5962% ( 5) 00:09:44.295 35893.563 - 36095.212: 99.6225% ( 3) 00:09:44.295 36095.212 - 36296.862: 99.6664% ( 5) 00:09:44.295 36296.862 - 36498.511: 99.7103% ( 5) 00:09:44.295 36498.511 - 36700.160: 99.7279% ( 2) 00:09:44.295 36700.160 - 36901.809: 99.7893% ( 7) 00:09:44.295 36901.809 - 37103.458: 99.8157% ( 3) 00:09:44.295 37103.458 - 37305.108: 99.8596% ( 5) 00:09:44.295 37305.108 - 37506.757: 99.8947% ( 4) 00:09:44.295 37506.757 - 37708.406: 99.9386% ( 5) 00:09:44.295 37708.406 - 37910.055: 99.9737% ( 4) 00:09:44.295 37910.055 - 38111.705: 100.0000% ( 3) 00:09:44.295 00:09:44.295 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:44.295 ============================================================================== 00:09:44.295 Range in us Cumulative IO count 00:09:44.295 5570.560 - 5595.766: 0.0702% ( 8) 00:09:44.295 5595.766 - 5620.972: 0.1931% ( 14) 00:09:44.295 5620.972 - 5646.178: 0.4916% ( 34) 00:09:44.295 5646.178 - 5671.385: 0.9656% ( 54) 00:09:44.295 5671.385 - 5696.591: 1.3869% ( 48) 00:09:44.295 5696.591 - 5721.797: 1.7468% ( 41) 00:09:44.295 5721.797 - 5747.003: 2.0453% ( 34) 00:09:44.295 5747.003 - 5772.209: 2.2384% ( 22) 00:09:44.295 5772.209 - 5797.415: 2.4491% ( 24) 00:09:44.295 5797.415 - 5822.622: 2.8441% ( 45) 00:09:44.295 5822.622 - 5847.828: 3.2128% ( 42) 00:09:44.295 5847.828 - 5873.034: 3.5639% ( 40) 00:09:44.295 5873.034 - 5898.240: 3.8536% ( 33) 00:09:44.295 5898.240 - 5923.446: 4.1696% ( 36) 00:09:44.295 5923.446 - 5948.652: 4.4768% ( 35) 00:09:44.295 5948.652 - 5973.858: 4.8104% ( 38) 00:09:44.295 5973.858 - 5999.065: 5.1352% ( 37) 00:09:44.295 5999.065 - 6024.271: 5.4688% ( 38) 00:09:44.295 6024.271 - 6049.477: 5.7848% ( 36) 00:09:44.295 6049.477 - 6074.683: 6.0920% ( 35) 00:09:44.295 6074.683 - 6099.889: 6.4343% ( 39) 00:09:44.295 6099.889 - 6125.095: 6.8030% ( 42) 00:09:44.295 6125.095 - 6150.302: 7.1541% ( 40) 00:09:44.295 6150.302 - 6175.508: 7.5053% ( 40) 00:09:44.295 6175.508 - 6200.714: 7.8476% ( 39) 00:09:44.295 6200.714 - 6225.920: 8.1987% ( 40) 00:09:44.295 6225.920 - 6251.126: 8.5235% ( 37) 00:09:44.295 6251.126 - 6276.332: 8.8834% ( 41) 00:09:44.295 6276.332 - 6301.538: 9.2346% ( 40) 00:09:44.295 6301.538 - 6326.745: 9.5418% ( 35) 00:09:44.295 6326.745 - 6351.951: 9.8929% ( 40) 00:09:44.295 6351.951 - 6377.157: 10.2265% ( 38) 00:09:44.295 6377.157 - 6402.363: 10.5864% ( 41) 00:09:44.295 6402.363 - 6427.569: 10.9024% ( 36) 00:09:44.295 6427.569 - 6452.775: 11.2623% ( 41) 00:09:44.295 6452.775 - 6503.188: 11.9733% ( 81) 00:09:44.295 6503.188 - 6553.600: 12.6492% ( 77) 00:09:44.295 6553.600 - 6604.012: 13.3515% ( 80) 00:09:44.295 6604.012 - 6654.425: 14.0274% ( 77) 00:09:44.295 6654.425 - 6704.837: 14.7911% ( 87) 00:09:44.295 6704.837 - 6755.249: 15.5021% ( 81) 00:09:44.295 6755.249 - 6805.662: 16.2044% ( 80) 00:09:44.295 6805.662 - 6856.074: 
16.9329% ( 83) 00:09:44.295 6856.074 - 6906.486: 17.6440% ( 81) 00:09:44.295 6906.486 - 6956.898: 18.3638% ( 82) 00:09:44.295 6956.898 - 7007.311: 19.1011% ( 84) 00:09:44.295 7007.311 - 7057.723: 19.8034% ( 80) 00:09:44.295 7057.723 - 7108.135: 20.5320% ( 83) 00:09:44.295 7108.135 - 7158.548: 21.2430% ( 81) 00:09:44.295 7158.548 - 7208.960: 21.9891% ( 85) 00:09:44.295 7208.960 - 7259.372: 22.6738% ( 78) 00:09:44.295 7259.372 - 7309.785: 23.4112% ( 84) 00:09:44.295 7309.785 - 7360.197: 24.1485% ( 84) 00:09:44.295 7360.197 - 7410.609: 24.8859% ( 84) 00:09:44.295 7410.609 - 7461.022: 25.4301% ( 62) 00:09:44.295 7461.022 - 7511.434: 25.7022% ( 31) 00:09:44.295 7511.434 - 7561.846: 25.8603% ( 18) 00:09:44.295 7561.846 - 7612.258: 25.9656% ( 12) 00:09:44.295 7612.258 - 7662.671: 26.0709% ( 12) 00:09:44.295 7662.671 - 7713.083: 26.1938% ( 14) 00:09:44.295 7713.083 - 7763.495: 26.2992% ( 12) 00:09:44.295 7763.495 - 7813.908: 26.4133% ( 13) 00:09:44.295 7813.908 - 7864.320: 26.5274% ( 13) 00:09:44.295 7864.320 - 7914.732: 26.6415% ( 13) 00:09:44.295 7914.732 - 7965.145: 26.7468% ( 12) 00:09:44.295 7965.145 - 8015.557: 26.8697% ( 14) 00:09:44.295 8015.557 - 8065.969: 26.9487% ( 9) 00:09:44.295 8065.969 - 8116.382: 27.0365% ( 10) 00:09:44.295 8116.382 - 8166.794: 27.1243% ( 10) 00:09:44.295 8166.794 - 8217.206: 27.2033% ( 9) 00:09:44.295 8217.206 - 8267.618: 27.2823% ( 9) 00:09:44.295 8267.618 - 8318.031: 27.3876% ( 12) 00:09:44.295 8318.031 - 8368.443: 27.4842% ( 11) 00:09:44.295 8368.443 - 8418.855: 27.6071% ( 14) 00:09:44.295 8418.855 - 8469.268: 27.7124% ( 12) 00:09:44.295 8469.268 - 8519.680: 27.7827% ( 8) 00:09:44.295 8519.680 - 8570.092: 27.8441% ( 7) 00:09:44.295 8570.092 - 8620.505: 27.9055% ( 7) 00:09:44.295 8620.505 - 8670.917: 27.9933% ( 10) 00:09:44.295 8670.917 - 8721.329: 28.0723% ( 9) 00:09:44.295 8721.329 - 8771.742: 28.1689% ( 11) 00:09:44.295 8771.742 - 8822.154: 28.2742% ( 12) 00:09:44.295 8822.154 - 8872.566: 28.3971% ( 14) 00:09:44.295 8872.566 - 8922.978: 28.5112% ( 13) 00:09:44.295 8922.978 - 8973.391: 28.6166% ( 12) 00:09:44.295 8973.391 - 9023.803: 28.7307% ( 13) 00:09:44.295 9023.803 - 9074.215: 28.8975% ( 19) 00:09:44.295 9074.215 - 9124.628: 29.0555% ( 18) 00:09:44.295 9124.628 - 9175.040: 29.2574% ( 23) 00:09:44.295 9175.040 - 9225.452: 29.4505% ( 22) 00:09:44.295 9225.452 - 9275.865: 29.6524% ( 23) 00:09:44.295 9275.865 - 9326.277: 29.8279% ( 20) 00:09:44.295 9326.277 - 9376.689: 30.0211% ( 22) 00:09:44.295 9376.689 - 9427.102: 30.2405% ( 25) 00:09:44.295 9427.102 - 9477.514: 30.4863% ( 28) 00:09:44.295 9477.514 - 9527.926: 30.7321% ( 28) 00:09:44.295 9527.926 - 9578.338: 31.0305% ( 34) 00:09:44.295 9578.338 - 9628.751: 31.4607% ( 49) 00:09:44.295 9628.751 - 9679.163: 31.9084% ( 51) 00:09:44.295 9679.163 - 9729.575: 32.3736% ( 53) 00:09:44.295 9729.575 - 9779.988: 32.8213% ( 51) 00:09:44.295 9779.988 - 9830.400: 33.2953% ( 54) 00:09:44.295 9830.400 - 9880.812: 33.7693% ( 54) 00:09:44.295 9880.812 - 9931.225: 34.2784% ( 58) 00:09:44.295 9931.225 - 9981.637: 34.8315% ( 63) 00:09:44.295 9981.637 - 10032.049: 35.3494% ( 59) 00:09:44.295 10032.049 - 10082.462: 35.8936% ( 62) 00:09:44.295 10082.462 - 10132.874: 36.4466% ( 63) 00:09:44.295 10132.874 - 10183.286: 36.9909% ( 62) 00:09:44.295 10183.286 - 10233.698: 37.5351% ( 62) 00:09:44.295 10233.698 - 10284.111: 38.1057% ( 65) 00:09:44.295 10284.111 - 10334.523: 38.6324% ( 60) 00:09:44.295 10334.523 - 10384.935: 39.2205% ( 67) 00:09:44.295 10384.935 - 10435.348: 39.8350% ( 70) 00:09:44.295 10435.348 - 10485.760: 40.4670% ( 72) 
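Each histogram row reads "low - high: cumulative% ( I/Os in this bucket )", so the percentile summaries earlier are lookups into these tables; for 0000:00:07.0, for example, the 50.00000% entry (11292.357us) is the bucket where the cumulative column first crosses 50%. A rough way to recover that from a saved copy of the output (perf.log is a hypothetical file, assumed trimmed to a single histogram with the per-line timestamps stripped):
awk '$2 == "-" && $4 + 0 >= 50 { print "p50 bucket starts at " $1 " us"; exit }' perf.log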
00:09:44.295 10485.760 - 10536.172: 41.0990% ( 72) 00:09:44.295 10536.172 - 10586.585: 41.7574% ( 75) 00:09:44.295 10586.585 - 10636.997: 42.4157% ( 75) 00:09:44.295 10636.997 - 10687.409: 43.0214% ( 69) 00:09:44.295 10687.409 - 10737.822: 43.6622% ( 73) 00:09:44.295 10737.822 - 10788.234: 44.2504% ( 67) 00:09:44.295 10788.234 - 10838.646: 44.8824% ( 72) 00:09:44.295 10838.646 - 10889.058: 45.4793% ( 68) 00:09:44.295 10889.058 - 10939.471: 46.1025% ( 71) 00:09:44.295 10939.471 - 10989.883: 46.7609% ( 75) 00:09:44.295 10989.883 - 11040.295: 47.3929% ( 72) 00:09:44.295 11040.295 - 11090.708: 48.0074% ( 70) 00:09:44.295 11090.708 - 11141.120: 48.6218% ( 70) 00:09:44.295 11141.120 - 11191.532: 49.2363% ( 70) 00:09:44.295 11191.532 - 11241.945: 49.8420% ( 69) 00:09:44.295 11241.945 - 11292.357: 50.4477% ( 69) 00:09:44.295 11292.357 - 11342.769: 51.0446% ( 68) 00:09:44.295 11342.769 - 11393.182: 51.6415% ( 68) 00:09:44.295 11393.182 - 11443.594: 52.2209% ( 66) 00:09:44.295 11443.594 - 11494.006: 52.8002% ( 66) 00:09:44.295 11494.006 - 11544.418: 53.4235% ( 71) 00:09:44.295 11544.418 - 11594.831: 53.9853% ( 64) 00:09:44.295 11594.831 - 11645.243: 54.5471% ( 64) 00:09:44.295 11645.243 - 11695.655: 55.0298% ( 55) 00:09:44.295 11695.655 - 11746.068: 55.5478% ( 59) 00:09:44.295 11746.068 - 11796.480: 56.0305% ( 55) 00:09:44.295 11796.480 - 11846.892: 56.5133% ( 55) 00:09:44.295 11846.892 - 11897.305: 56.9698% ( 52) 00:09:44.295 11897.305 - 11947.717: 57.4175% ( 51) 00:09:44.295 11947.717 - 11998.129: 57.9178% ( 57) 00:09:44.295 11998.129 - 12048.542: 58.3392% ( 48) 00:09:44.295 12048.542 - 12098.954: 58.7781% ( 50) 00:09:44.295 12098.954 - 12149.366: 59.2082% ( 49) 00:09:44.295 12149.366 - 12199.778: 59.6822% ( 54) 00:09:44.295 12199.778 - 12250.191: 60.1738% ( 56) 00:09:44.295 12250.191 - 12300.603: 60.5952% ( 48) 00:09:44.295 12300.603 - 12351.015: 60.9902% ( 45) 00:09:44.295 12351.015 - 12401.428: 61.4027% ( 47) 00:09:44.295 12401.428 - 12451.840: 61.8680% ( 53) 00:09:44.295 12451.840 - 12502.252: 62.2981% ( 49) 00:09:44.295 12502.252 - 12552.665: 62.7370% ( 50) 00:09:44.295 12552.665 - 12603.077: 63.1671% ( 49) 00:09:44.295 12603.077 - 12653.489: 63.5885% ( 48) 00:09:44.295 12653.489 - 12703.902: 64.0098% ( 48) 00:09:44.295 12703.902 - 12754.314: 64.4487% ( 50) 00:09:44.295 12754.314 - 12804.726: 64.9140% ( 53) 00:09:44.295 12804.726 - 12855.138: 65.5021% ( 67) 00:09:44.295 12855.138 - 12905.551: 66.0639% ( 64) 00:09:44.295 12905.551 - 13006.375: 67.0558% ( 113) 00:09:44.295 13006.375 - 13107.200: 68.1180% ( 121) 00:09:44.296 13107.200 - 13208.025: 69.1538% ( 118) 00:09:44.296 13208.025 - 13308.849: 70.2511% ( 125) 00:09:44.296 13308.849 - 13409.674: 71.2781% ( 117) 00:09:44.296 13409.674 - 13510.498: 72.4544% ( 134) 00:09:44.296 13510.498 - 13611.323: 73.6306% ( 134) 00:09:44.296 13611.323 - 13712.148: 74.6928% ( 121) 00:09:44.296 13712.148 - 13812.972: 75.9129% ( 139) 00:09:44.296 13812.972 - 13913.797: 77.0804% ( 133) 00:09:44.296 13913.797 - 14014.622: 78.4059% ( 151) 00:09:44.296 14014.622 - 14115.446: 79.6963% ( 147) 00:09:44.296 14115.446 - 14216.271: 80.9603% ( 144) 00:09:44.296 14216.271 - 14317.095: 82.1103% ( 131) 00:09:44.296 14317.095 - 14417.920: 83.1900% ( 123) 00:09:44.296 14417.920 - 14518.745: 84.2521% ( 121) 00:09:44.296 14518.745 - 14619.569: 85.3581% ( 126) 00:09:44.296 14619.569 - 14720.394: 86.3940% ( 118) 00:09:44.296 14720.394 - 14821.218: 87.4122% ( 116) 00:09:44.296 14821.218 - 14922.043: 88.3954% ( 112) 00:09:44.296 14922.043 - 15022.868: 89.3610% ( 110) 00:09:44.296 
15022.868 - 15123.692: 90.2212% ( 98) 00:09:44.296 15123.692 - 15224.517: 90.9059% ( 78) 00:09:44.296 15224.517 - 15325.342: 91.5730% ( 76) 00:09:44.296 15325.342 - 15426.166: 92.2051% ( 72) 00:09:44.296 15426.166 - 15526.991: 92.7844% ( 66) 00:09:44.296 15526.991 - 15627.815: 93.2935% ( 58) 00:09:44.296 15627.815 - 15728.640: 93.8114% ( 59) 00:09:44.296 15728.640 - 15829.465: 94.3030% ( 56) 00:09:44.296 15829.465 - 15930.289: 94.7156% ( 47) 00:09:44.296 15930.289 - 16031.114: 95.0930% ( 43) 00:09:44.296 16031.114 - 16131.938: 95.4617% ( 42) 00:09:44.296 16131.938 - 16232.763: 95.8041% ( 39) 00:09:44.296 16232.763 - 16333.588: 96.0586% ( 29) 00:09:44.296 16333.588 - 16434.412: 96.2430% ( 21) 00:09:44.296 16434.412 - 16535.237: 96.3834% ( 16) 00:09:44.296 16535.237 - 16636.062: 96.5327% ( 17) 00:09:44.296 16636.062 - 16736.886: 96.6555% ( 14) 00:09:44.296 16736.886 - 16837.711: 96.7784% ( 14) 00:09:44.296 16837.711 - 16938.535: 96.8750% ( 11) 00:09:44.296 16938.535 - 17039.360: 96.9452% ( 8) 00:09:44.296 17039.360 - 17140.185: 97.0154% ( 8) 00:09:44.296 17140.185 - 17241.009: 97.0945% ( 9) 00:09:44.296 17241.009 - 17341.834: 97.1822% ( 10) 00:09:44.296 17341.834 - 17442.658: 97.3227% ( 16) 00:09:44.296 17442.658 - 17543.483: 97.4368% ( 13) 00:09:44.296 17543.483 - 17644.308: 97.5246% ( 10) 00:09:44.296 17644.308 - 17745.132: 97.6211% ( 11) 00:09:44.296 17745.132 - 17845.957: 97.7089% ( 10) 00:09:44.296 17845.957 - 17946.782: 97.7967% ( 10) 00:09:44.296 17946.782 - 18047.606: 97.8845% ( 10) 00:09:44.296 18047.606 - 18148.431: 97.9723% ( 10) 00:09:44.296 18148.431 - 18249.255: 98.0688% ( 11) 00:09:44.296 18249.255 - 18350.080: 98.1566% ( 10) 00:09:44.296 18350.080 - 18450.905: 98.2444% ( 10) 00:09:44.296 18450.905 - 18551.729: 98.3322% ( 10) 00:09:44.296 18551.729 - 18652.554: 98.3761% ( 5) 00:09:44.296 18652.554 - 18753.378: 98.4287% ( 6) 00:09:44.296 18753.378 - 18854.203: 98.4726% ( 5) 00:09:44.296 18854.203 - 18955.028: 98.5253% ( 6) 00:09:44.296 18955.028 - 19055.852: 98.5692% ( 5) 00:09:44.296 19055.852 - 19156.677: 98.6218% ( 6) 00:09:44.296 19156.677 - 19257.502: 98.6657% ( 5) 00:09:44.296 19257.502 - 19358.326: 98.7096% ( 5) 00:09:44.296 19358.326 - 19459.151: 98.7535% ( 5) 00:09:44.296 19459.151 - 19559.975: 98.7886% ( 4) 00:09:44.296 19559.975 - 19660.800: 98.8413% ( 6) 00:09:44.296 19660.800 - 19761.625: 98.8764% ( 4) 00:09:44.296 32263.877 - 32465.526: 98.9115% ( 4) 00:09:44.296 32465.526 - 32667.175: 98.9466% ( 4) 00:09:44.296 32667.175 - 32868.825: 98.9905% ( 5) 00:09:44.296 32868.825 - 33070.474: 99.0344% ( 5) 00:09:44.296 33070.474 - 33272.123: 99.0695% ( 4) 00:09:44.296 33272.123 - 33473.772: 99.1134% ( 5) 00:09:44.296 33473.772 - 33675.422: 99.1485% ( 4) 00:09:44.296 33675.422 - 33877.071: 99.1836% ( 4) 00:09:44.296 33877.071 - 34078.720: 99.2188% ( 4) 00:09:44.296 34078.720 - 34280.369: 99.2539% ( 4) 00:09:44.296 34280.369 - 34482.018: 99.2978% ( 5) 00:09:44.296 34482.018 - 34683.668: 99.3416% ( 5) 00:09:44.296 34683.668 - 34885.317: 99.3768% ( 4) 00:09:44.296 34885.317 - 35086.966: 99.4206% ( 5) 00:09:44.296 35086.966 - 35288.615: 99.4645% ( 5) 00:09:44.296 35288.615 - 35490.265: 99.5084% ( 5) 00:09:44.296 35490.265 - 35691.914: 99.5435% ( 4) 00:09:44.296 35691.914 - 35893.563: 99.5787% ( 4) 00:09:44.296 35893.563 - 36095.212: 99.6138% ( 4) 00:09:44.296 36095.212 - 36296.862: 99.6577% ( 5) 00:09:44.296 36296.862 - 36498.511: 99.7015% ( 5) 00:09:44.296 36498.511 - 36700.160: 99.7367% ( 4) 00:09:44.296 36700.160 - 36901.809: 99.7805% ( 5) 00:09:44.296 36901.809 - 37103.458: 
99.8244% ( 5) 00:09:44.296 37103.458 - 37305.108: 99.8683% ( 5) 00:09:44.296 37305.108 - 37506.757: 99.9034% ( 4) 00:09:44.296 37506.757 - 37708.406: 99.9473% ( 5) 00:09:44.296 37708.406 - 37910.055: 99.9824% ( 4) 00:09:44.296 37910.055 - 38111.705: 100.0000% ( 2) 00:09:44.296 00:09:44.296 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:44.296 ============================================================================== 00:09:44.296 Range in us Cumulative IO count 00:09:44.296 5520.148 - 5545.354: 0.0088% ( 1) 00:09:44.296 5545.354 - 5570.560: 0.0702% ( 7) 00:09:44.296 5570.560 - 5595.766: 0.2721% ( 23) 00:09:44.296 5595.766 - 5620.972: 0.4916% ( 25) 00:09:44.296 5620.972 - 5646.178: 0.7637% ( 31) 00:09:44.296 5646.178 - 5671.385: 0.9919% ( 26) 00:09:44.296 5671.385 - 5696.591: 1.3343% ( 39) 00:09:44.296 5696.591 - 5721.797: 1.6503% ( 36) 00:09:44.296 5721.797 - 5747.003: 1.9751% ( 37) 00:09:44.296 5747.003 - 5772.209: 2.3086% ( 38) 00:09:44.296 5772.209 - 5797.415: 2.6510% ( 39) 00:09:44.296 5797.415 - 5822.622: 2.9846% ( 38) 00:09:44.296 5822.622 - 5847.828: 3.3006% ( 36) 00:09:44.296 5847.828 - 5873.034: 3.6166% ( 36) 00:09:44.296 5873.034 - 5898.240: 3.9765% ( 41) 00:09:44.296 5898.240 - 5923.446: 4.2662% ( 33) 00:09:44.296 5923.446 - 5948.652: 4.6085% ( 39) 00:09:44.296 5948.652 - 5973.858: 4.9245% ( 36) 00:09:44.296 5973.858 - 5999.065: 5.3020% ( 43) 00:09:44.296 5999.065 - 6024.271: 5.6180% ( 36) 00:09:44.296 6024.271 - 6049.477: 5.9779% ( 41) 00:09:44.296 6049.477 - 6074.683: 6.2939% ( 36) 00:09:44.296 6074.683 - 6099.889: 6.6626% ( 42) 00:09:44.296 6099.889 - 6125.095: 7.0049% ( 39) 00:09:44.296 6125.095 - 6150.302: 7.3473% ( 39) 00:09:44.296 6150.302 - 6175.508: 7.6808% ( 38) 00:09:44.296 6175.508 - 6200.714: 8.0671% ( 44) 00:09:44.296 6200.714 - 6225.920: 8.3831% ( 36) 00:09:44.296 6225.920 - 6251.126: 8.7518% ( 42) 00:09:44.296 6251.126 - 6276.332: 9.0765% ( 37) 00:09:44.296 6276.332 - 6301.538: 9.4452% ( 42) 00:09:44.296 6301.538 - 6326.745: 9.7788% ( 38) 00:09:44.296 6326.745 - 6351.951: 10.1475% ( 42) 00:09:44.296 6351.951 - 6377.157: 10.4810% ( 38) 00:09:44.296 6377.157 - 6402.363: 10.8234% ( 39) 00:09:44.296 6402.363 - 6427.569: 11.1570% ( 38) 00:09:44.296 6427.569 - 6452.775: 11.4905% ( 38) 00:09:44.296 6452.775 - 6503.188: 12.2015% ( 81) 00:09:44.296 6503.188 - 6553.600: 12.8599% ( 75) 00:09:44.296 6553.600 - 6604.012: 13.5183% ( 75) 00:09:44.296 6604.012 - 6654.425: 14.2381% ( 82) 00:09:44.296 6654.425 - 6704.837: 14.9754% ( 84) 00:09:44.296 6704.837 - 6755.249: 15.7040% ( 83) 00:09:44.296 6755.249 - 6805.662: 16.4326% ( 83) 00:09:44.296 6805.662 - 6856.074: 17.2138% ( 89) 00:09:44.296 6856.074 - 6906.486: 17.9336% ( 82) 00:09:44.296 6906.486 - 6956.898: 18.6798% ( 85) 00:09:44.296 6956.898 - 7007.311: 19.4435% ( 87) 00:09:44.296 7007.311 - 7057.723: 20.1896% ( 85) 00:09:44.296 7057.723 - 7108.135: 20.9445% ( 86) 00:09:44.296 7108.135 - 7158.548: 21.6994% ( 86) 00:09:44.296 7158.548 - 7208.960: 22.4456% ( 85) 00:09:44.296 7208.960 - 7259.372: 23.1917% ( 85) 00:09:44.296 7259.372 - 7309.785: 23.9466% ( 86) 00:09:44.296 7309.785 - 7360.197: 24.7015% ( 86) 00:09:44.296 7360.197 - 7410.609: 25.4038% ( 80) 00:09:44.296 7410.609 - 7461.022: 25.8690% ( 53) 00:09:44.296 7461.022 - 7511.434: 26.1236% ( 29) 00:09:44.296 7511.434 - 7561.846: 26.2816% ( 18) 00:09:44.296 7561.846 - 7612.258: 26.3957% ( 13) 00:09:44.296 7612.258 - 7662.671: 26.5011% ( 12) 00:09:44.296 7662.671 - 7713.083: 26.6064% ( 12) 00:09:44.296 7713.083 - 7763.495: 26.7381% ( 15) 
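The Device Information table at the top of this perf output is internally consistent: MiB/s equals IOPS times the 12288-byte I/O size, and the Total row is the sum over the six namespaces (the last digit differs only by rounding of the per-device values). A quick check, using awk purely as a calculator:
awk 'BEGIN {
  iops = 11347.72; io_bytes = 12288                              # one namespace row from the table
  printf "MiB/s      = %.2f\n", iops * io_bytes / (1024 * 1024)  # -> 132.98, as tabulated
  printf "Total IOPS = %.2f\n", 6 * iops                         # -> 68086.32 (table: 68086.33)
}'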
00:09:44.296 7763.495 - 7813.908: 26.8434% ( 12) 00:09:44.296 7813.908 - 7864.320: 26.9487% ( 12) 00:09:44.296 7864.320 - 7914.732: 27.0716% ( 14) 00:09:44.296 7914.732 - 7965.145: 27.1594% ( 10) 00:09:44.296 7965.145 - 8015.557: 27.2296% ( 8) 00:09:44.296 8015.557 - 8065.969: 27.3174% ( 10) 00:09:44.296 8065.969 - 8116.382: 27.4052% ( 10) 00:09:44.296 8116.382 - 8166.794: 27.4842% ( 9) 00:09:44.296 8166.794 - 8217.206: 27.6071% ( 14) 00:09:44.296 8217.206 - 8267.618: 27.7124% ( 12) 00:09:44.296 8267.618 - 8318.031: 27.8265% ( 13) 00:09:44.296 8318.031 - 8368.443: 27.9319% ( 12) 00:09:44.296 8368.443 - 8418.855: 28.0811% ( 17) 00:09:44.296 8418.855 - 8469.268: 28.1952% ( 13) 00:09:44.296 8469.268 - 8519.680: 28.3269% ( 15) 00:09:44.296 8519.680 - 8570.092: 28.4147% ( 10) 00:09:44.296 8570.092 - 8620.505: 28.4937% ( 9) 00:09:44.296 8620.505 - 8670.917: 28.5376% ( 5) 00:09:44.296 8670.917 - 8721.329: 28.5990% ( 7) 00:09:44.296 8721.329 - 8771.742: 28.7131% ( 13) 00:09:44.296 8771.742 - 8822.154: 28.8185% ( 12) 00:09:44.296 8822.154 - 8872.566: 28.9326% ( 13) 00:09:44.296 8872.566 - 8922.978: 29.0379% ( 12) 00:09:44.296 8922.978 - 8973.391: 29.1608% ( 14) 00:09:44.296 8973.391 - 9023.803: 29.2749% ( 13) 00:09:44.296 9023.803 - 9074.215: 29.3890% ( 13) 00:09:44.297 9074.215 - 9124.628: 29.4944% ( 12) 00:09:44.297 9124.628 - 9175.040: 29.6261% ( 15) 00:09:44.297 9175.040 - 9225.452: 29.7665% ( 16) 00:09:44.297 9225.452 - 9275.865: 29.9070% ( 16) 00:09:44.297 9275.865 - 9326.277: 30.0913% ( 21) 00:09:44.297 9326.277 - 9376.689: 30.3371% ( 28) 00:09:44.297 9376.689 - 9427.102: 30.5126% ( 20) 00:09:44.297 9427.102 - 9477.514: 30.7233% ( 24) 00:09:44.297 9477.514 - 9527.926: 30.9603% ( 27) 00:09:44.297 9527.926 - 9578.338: 31.2149% ( 29) 00:09:44.297 9578.338 - 9628.751: 31.5309% ( 36) 00:09:44.297 9628.751 - 9679.163: 31.9171% ( 44) 00:09:44.297 9679.163 - 9729.575: 32.3034% ( 44) 00:09:44.297 9729.575 - 9779.988: 32.7774% ( 54) 00:09:44.297 9779.988 - 9830.400: 33.2426% ( 53) 00:09:44.297 9830.400 - 9880.812: 33.7342% ( 56) 00:09:44.297 9880.812 - 9931.225: 34.2784% ( 62) 00:09:44.297 9931.225 - 9981.637: 34.8051% ( 60) 00:09:44.297 9981.637 - 10032.049: 35.4108% ( 69) 00:09:44.297 10032.049 - 10082.462: 36.0165% ( 69) 00:09:44.297 10082.462 - 10132.874: 36.5344% ( 59) 00:09:44.297 10132.874 - 10183.286: 37.1313% ( 68) 00:09:44.297 10183.286 - 10233.698: 37.7633% ( 72) 00:09:44.297 10233.698 - 10284.111: 38.2812% ( 59) 00:09:44.297 10284.111 - 10334.523: 38.9045% ( 71) 00:09:44.297 10334.523 - 10384.935: 39.5277% ( 71) 00:09:44.297 10384.935 - 10435.348: 40.1773% ( 74) 00:09:44.297 10435.348 - 10485.760: 40.8357% ( 75) 00:09:44.297 10485.760 - 10536.172: 41.4501% ( 70) 00:09:44.297 10536.172 - 10586.585: 42.1085% ( 75) 00:09:44.297 10586.585 - 10636.997: 42.7230% ( 70) 00:09:44.297 10636.997 - 10687.409: 43.3638% ( 73) 00:09:44.297 10687.409 - 10737.822: 44.0221% ( 75) 00:09:44.297 10737.822 - 10788.234: 44.6103% ( 67) 00:09:44.297 10788.234 - 10838.646: 45.2072% ( 68) 00:09:44.297 10838.646 - 10889.058: 45.8392% ( 72) 00:09:44.297 10889.058 - 10939.471: 46.4273% ( 67) 00:09:44.297 10939.471 - 10989.883: 47.0769% ( 74) 00:09:44.297 10989.883 - 11040.295: 47.6738% ( 68) 00:09:44.297 11040.295 - 11090.708: 48.2971% ( 71) 00:09:44.297 11090.708 - 11141.120: 48.8852% ( 67) 00:09:44.297 11141.120 - 11191.532: 49.5435% ( 75) 00:09:44.297 11191.532 - 11241.945: 50.1843% ( 73) 00:09:44.297 11241.945 - 11292.357: 50.7900% ( 69) 00:09:44.297 11292.357 - 11342.769: 51.4572% ( 76) 00:09:44.297 11342.769 - 
11393.182: 52.0629% ( 69) 00:09:44.297 11393.182 - 11443.594: 52.6773% ( 70) 00:09:44.297 11443.594 - 11494.006: 53.2654% ( 67) 00:09:44.297 11494.006 - 11544.418: 53.8536% ( 67) 00:09:44.297 11544.418 - 11594.831: 54.4505% ( 68) 00:09:44.297 11594.831 - 11645.243: 55.0298% ( 66) 00:09:44.297 11645.243 - 11695.655: 55.6268% ( 68) 00:09:44.297 11695.655 - 11746.068: 56.1710% ( 62) 00:09:44.297 11746.068 - 11796.480: 56.7240% ( 63) 00:09:44.297 11796.480 - 11846.892: 57.2331% ( 58) 00:09:44.297 11846.892 - 11897.305: 57.7335% ( 57) 00:09:44.297 11897.305 - 11947.717: 58.2338% ( 57) 00:09:44.297 11947.717 - 11998.129: 58.6903% ( 52) 00:09:44.297 11998.129 - 12048.542: 59.0765% ( 44) 00:09:44.297 12048.542 - 12098.954: 59.4716% ( 45) 00:09:44.297 12098.954 - 12149.366: 59.8402% ( 42) 00:09:44.297 12149.366 - 12199.778: 60.1914% ( 40) 00:09:44.297 12199.778 - 12250.191: 60.5513% ( 41) 00:09:44.297 12250.191 - 12300.603: 60.9375% ( 44) 00:09:44.297 12300.603 - 12351.015: 61.2974% ( 41) 00:09:44.297 12351.015 - 12401.428: 61.6397% ( 39) 00:09:44.297 12401.428 - 12451.840: 62.0787% ( 50) 00:09:44.297 12451.840 - 12502.252: 62.4649% ( 44) 00:09:44.297 12502.252 - 12552.665: 62.8511% ( 44) 00:09:44.297 12552.665 - 12603.077: 63.2461% ( 45) 00:09:44.297 12603.077 - 12653.489: 63.6060% ( 41) 00:09:44.297 12653.489 - 12703.902: 64.0274% ( 48) 00:09:44.297 12703.902 - 12754.314: 64.3873% ( 41) 00:09:44.297 12754.314 - 12804.726: 64.7999% ( 47) 00:09:44.297 12804.726 - 12855.138: 65.3002% ( 57) 00:09:44.297 12855.138 - 12905.551: 65.7567% ( 52) 00:09:44.297 12905.551 - 13006.375: 66.9154% ( 132) 00:09:44.297 13006.375 - 13107.200: 68.0214% ( 126) 00:09:44.297 13107.200 - 13208.025: 69.2942% ( 145) 00:09:44.297 13208.025 - 13308.849: 70.5320% ( 141) 00:09:44.297 13308.849 - 13409.674: 71.7433% ( 138) 00:09:44.297 13409.674 - 13510.498: 73.0513% ( 149) 00:09:44.297 13510.498 - 13611.323: 74.3241% ( 145) 00:09:44.297 13611.323 - 13712.148: 75.5179% ( 136) 00:09:44.297 13712.148 - 13812.972: 76.6854% ( 133) 00:09:44.297 13812.972 - 13913.797: 77.8529% ( 133) 00:09:44.297 13913.797 - 14014.622: 78.9677% ( 127) 00:09:44.297 14014.622 - 14115.446: 80.1001% ( 129) 00:09:44.297 14115.446 - 14216.271: 81.2324% ( 129) 00:09:44.297 14216.271 - 14317.095: 82.3824% ( 131) 00:09:44.297 14317.095 - 14417.920: 83.5411% ( 132) 00:09:44.297 14417.920 - 14518.745: 84.6822% ( 130) 00:09:44.297 14518.745 - 14619.569: 85.7707% ( 124) 00:09:44.297 14619.569 - 14720.394: 86.8504% ( 123) 00:09:44.297 14720.394 - 14821.218: 87.8423% ( 113) 00:09:44.297 14821.218 - 14922.043: 88.8518% ( 115) 00:09:44.297 14922.043 - 15022.868: 89.7560% ( 103) 00:09:44.297 15022.868 - 15123.692: 90.5109% ( 86) 00:09:44.297 15123.692 - 15224.517: 91.2482% ( 84) 00:09:44.297 15224.517 - 15325.342: 91.9505% ( 80) 00:09:44.297 15325.342 - 15426.166: 92.6001% ( 74) 00:09:44.297 15426.166 - 15526.991: 93.1531% ( 63) 00:09:44.297 15526.991 - 15627.815: 93.6096% ( 52) 00:09:44.297 15627.815 - 15728.640: 93.9958% ( 44) 00:09:44.297 15728.640 - 15829.465: 94.2416% ( 28) 00:09:44.297 15829.465 - 15930.289: 94.5049% ( 30) 00:09:44.297 15930.289 - 16031.114: 94.7244% ( 25) 00:09:44.297 16031.114 - 16131.938: 94.9350% ( 24) 00:09:44.297 16131.938 - 16232.763: 95.1369% ( 23) 00:09:44.297 16232.763 - 16333.588: 95.3125% ( 20) 00:09:44.297 16333.588 - 16434.412: 95.5495% ( 27) 00:09:44.297 16434.412 - 16535.237: 95.7602% ( 24) 00:09:44.297 16535.237 - 16636.062: 95.9884% ( 26) 00:09:44.297 16636.062 - 16736.886: 96.1903% ( 23) 00:09:44.297 16736.886 - 16837.711: 
96.3571% ( 19) 00:09:44.297 16837.711 - 16938.535: 96.5063% ( 17) 00:09:44.297 16938.535 - 17039.360: 96.6731% ( 19) 00:09:44.297 17039.360 - 17140.185: 96.8048% ( 15) 00:09:44.297 17140.185 - 17241.009: 96.9101% ( 12) 00:09:44.297 17241.009 - 17341.834: 97.0330% ( 14) 00:09:44.297 17341.834 - 17442.658: 97.1471% ( 13) 00:09:44.297 17442.658 - 17543.483: 97.2349% ( 10) 00:09:44.297 17543.483 - 17644.308: 97.3227% ( 10) 00:09:44.297 17644.308 - 17745.132: 97.4105% ( 10) 00:09:44.297 17745.132 - 17845.957: 97.4982% ( 10) 00:09:44.297 17845.957 - 17946.782: 97.5948% ( 11) 00:09:44.297 17946.782 - 18047.606: 97.6738% ( 9) 00:09:44.297 18047.606 - 18148.431: 97.7704% ( 11) 00:09:44.297 18148.431 - 18249.255: 97.8581% ( 10) 00:09:44.297 18249.255 - 18350.080: 97.9459% ( 10) 00:09:44.297 18350.080 - 18450.905: 98.0337% ( 10) 00:09:44.297 18450.905 - 18551.729: 98.1303% ( 11) 00:09:44.297 18551.729 - 18652.554: 98.2180% ( 10) 00:09:44.297 18652.554 - 18753.378: 98.3058% ( 10) 00:09:44.297 18753.378 - 18854.203: 98.3848% ( 9) 00:09:44.297 18854.203 - 18955.028: 98.4463% ( 7) 00:09:44.297 18955.028 - 19055.852: 98.4902% ( 5) 00:09:44.297 19055.852 - 19156.677: 98.5428% ( 6) 00:09:44.297 19156.677 - 19257.502: 98.5867% ( 5) 00:09:44.297 19257.502 - 19358.326: 98.6306% ( 5) 00:09:44.297 19358.326 - 19459.151: 98.6745% ( 5) 00:09:44.297 19459.151 - 19559.975: 98.7272% ( 6) 00:09:44.297 19559.975 - 19660.800: 98.7535% ( 3) 00:09:44.297 19660.800 - 19761.625: 98.7886% ( 4) 00:09:44.297 19761.625 - 19862.449: 98.8150% ( 3) 00:09:44.297 19862.449 - 19963.274: 98.8413% ( 3) 00:09:44.297 19963.274 - 20064.098: 98.8764% ( 4) 00:09:44.297 33473.772 - 33675.422: 98.9027% ( 3) 00:09:44.297 33675.422 - 33877.071: 98.9466% ( 5) 00:09:44.297 33877.071 - 34078.720: 98.9905% ( 5) 00:09:44.297 34078.720 - 34280.369: 99.0520% ( 7) 00:09:44.297 34280.369 - 34482.018: 99.1046% ( 6) 00:09:44.297 34482.018 - 34683.668: 99.1573% ( 6) 00:09:44.297 34683.668 - 34885.317: 99.2100% ( 6) 00:09:44.297 34885.317 - 35086.966: 99.2626% ( 6) 00:09:44.297 35086.966 - 35288.615: 99.3153% ( 6) 00:09:44.297 35288.615 - 35490.265: 99.3680% ( 6) 00:09:44.297 35490.265 - 35691.914: 99.4206% ( 6) 00:09:44.297 35691.914 - 35893.563: 99.4821% ( 7) 00:09:44.297 35893.563 - 36095.212: 99.5260% ( 5) 00:09:44.297 36095.212 - 36296.862: 99.5787% ( 6) 00:09:44.297 36296.862 - 36498.511: 99.6313% ( 6) 00:09:44.297 36498.511 - 36700.160: 99.6840% ( 6) 00:09:44.297 36700.160 - 36901.809: 99.7454% ( 7) 00:09:44.297 36901.809 - 37103.458: 99.7981% ( 6) 00:09:44.297 37103.458 - 37305.108: 99.8508% ( 6) 00:09:44.297 37305.108 - 37506.757: 99.9034% ( 6) 00:09:44.298 37506.757 - 37708.406: 99.9649% ( 7) 00:09:44.298 37708.406 - 37910.055: 100.0000% ( 4) 00:09:44.298 00:09:44.298 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:44.298 ============================================================================== 00:09:44.298 Range in us Cumulative IO count 00:09:44.298 5545.354 - 5570.560: 0.0088% ( 1) 00:09:44.298 5570.560 - 5595.766: 0.0527% ( 5) 00:09:44.298 5595.766 - 5620.972: 0.2282% ( 20) 00:09:44.298 5620.972 - 5646.178: 0.3862% ( 18) 00:09:44.298 5646.178 - 5671.385: 0.6584% ( 31) 00:09:44.298 5671.385 - 5696.591: 1.1148% ( 52) 00:09:44.298 5696.591 - 5721.797: 1.6415% ( 60) 00:09:44.298 5721.797 - 5747.003: 2.1243% ( 55) 00:09:44.298 5747.003 - 5772.209: 2.5808% ( 52) 00:09:44.298 5772.209 - 5797.415: 2.8704% ( 33) 00:09:44.298 5797.415 - 5822.622: 3.1426% ( 31) 00:09:44.298 5822.622 - 5847.828: 3.3883% ( 28) 00:09:44.298 5847.828 
- 5873.034: 3.6429% ( 29) 00:09:44.298 5873.034 - 5898.240: 3.9238% ( 32) 00:09:44.298 5898.240 - 5923.446: 4.2398% ( 36) 00:09:44.298 5923.446 - 5948.652: 4.5295% ( 33) 00:09:44.298 5948.652 - 5973.858: 4.8806% ( 40) 00:09:44.298 5973.858 - 5999.065: 5.2054% ( 37) 00:09:44.298 5999.065 - 6024.271: 5.5565% ( 40) 00:09:44.298 6024.271 - 6049.477: 5.8813% ( 37) 00:09:44.298 6049.477 - 6074.683: 6.2149% ( 38) 00:09:44.298 6074.683 - 6099.889: 6.5309% ( 36) 00:09:44.298 6099.889 - 6125.095: 6.8820% ( 40) 00:09:44.298 6125.095 - 6150.302: 7.2156% ( 38) 00:09:44.298 6150.302 - 6175.508: 7.5228% ( 35) 00:09:44.298 6175.508 - 6200.714: 7.8652% ( 39) 00:09:44.298 6200.714 - 6225.920: 8.2514% ( 44) 00:09:44.298 6225.920 - 6251.126: 8.6113% ( 41) 00:09:44.298 6251.126 - 6276.332: 8.9273% ( 36) 00:09:44.298 6276.332 - 6301.538: 9.2784% ( 40) 00:09:44.298 6301.538 - 6326.745: 9.5945% ( 36) 00:09:44.298 6326.745 - 6351.951: 9.9368% ( 39) 00:09:44.298 6351.951 - 6377.157: 10.2967% ( 41) 00:09:44.298 6377.157 - 6402.363: 10.6303% ( 38) 00:09:44.298 6402.363 - 6427.569: 10.9287% ( 34) 00:09:44.298 6427.569 - 6452.775: 11.2447% ( 36) 00:09:44.298 6452.775 - 6503.188: 11.9382% ( 79) 00:09:44.298 6503.188 - 6553.600: 12.6053% ( 76) 00:09:44.298 6553.600 - 6604.012: 13.3866% ( 89) 00:09:44.298 6604.012 - 6654.425: 14.0888% ( 80) 00:09:44.298 6654.425 - 6704.837: 14.7911% ( 80) 00:09:44.298 6704.837 - 6755.249: 15.5109% ( 82) 00:09:44.298 6755.249 - 6805.662: 16.3360% ( 94) 00:09:44.298 6805.662 - 6856.074: 17.0471% ( 81) 00:09:44.298 6856.074 - 6906.486: 17.7756% ( 83) 00:09:44.298 6906.486 - 6956.898: 18.5305% ( 86) 00:09:44.298 6956.898 - 7007.311: 19.3030% ( 88) 00:09:44.298 7007.311 - 7057.723: 20.0140% ( 81) 00:09:44.298 7057.723 - 7108.135: 20.7690% ( 86) 00:09:44.298 7108.135 - 7158.548: 21.5239% ( 86) 00:09:44.298 7158.548 - 7208.960: 22.2788% ( 86) 00:09:44.298 7208.960 - 7259.372: 23.0162% ( 84) 00:09:44.298 7259.372 - 7309.785: 23.7798% ( 87) 00:09:44.298 7309.785 - 7360.197: 24.5435% ( 87) 00:09:44.298 7360.197 - 7410.609: 25.2633% ( 82) 00:09:44.298 7410.609 - 7461.022: 25.7637% ( 57) 00:09:44.298 7461.022 - 7511.434: 26.0095% ( 28) 00:09:44.298 7511.434 - 7561.846: 26.1675% ( 18) 00:09:44.298 7561.846 - 7612.258: 26.2816% ( 13) 00:09:44.298 7612.258 - 7662.671: 26.3869% ( 12) 00:09:44.298 7662.671 - 7713.083: 26.5011% ( 13) 00:09:44.298 7713.083 - 7763.495: 26.6152% ( 13) 00:09:44.298 7763.495 - 7813.908: 26.7205% ( 12) 00:09:44.298 7813.908 - 7864.320: 26.8258% ( 12) 00:09:44.298 7864.320 - 7914.732: 26.8961% ( 8) 00:09:44.298 7914.732 - 7965.145: 26.9926% ( 11) 00:09:44.298 7965.145 - 8015.557: 27.0716% ( 9) 00:09:44.298 8015.557 - 8065.969: 27.1594% ( 10) 00:09:44.298 8065.969 - 8116.382: 27.2296% ( 8) 00:09:44.298 8116.382 - 8166.794: 27.3174% ( 10) 00:09:44.298 8166.794 - 8217.206: 27.3964% ( 9) 00:09:44.298 8217.206 - 8267.618: 27.4842% ( 10) 00:09:44.298 8267.618 - 8318.031: 27.5632% ( 9) 00:09:44.298 8318.031 - 8368.443: 27.6861% ( 14) 00:09:44.298 8368.443 - 8418.855: 27.8002% ( 13) 00:09:44.298 8418.855 - 8469.268: 27.9143% ( 13) 00:09:44.298 8469.268 - 8519.680: 28.0284% ( 13) 00:09:44.298 8519.680 - 8570.092: 28.1162% ( 10) 00:09:44.298 8570.092 - 8620.505: 28.2479% ( 15) 00:09:44.298 8620.505 - 8670.917: 28.3620% ( 13) 00:09:44.298 8670.917 - 8721.329: 28.4586% ( 11) 00:09:44.298 8721.329 - 8771.742: 28.5815% ( 14) 00:09:44.298 8771.742 - 8822.154: 28.7219% ( 16) 00:09:44.298 8822.154 - 8872.566: 28.8624% ( 16) 00:09:44.298 8872.566 - 8922.978: 28.9940% ( 15) 00:09:44.298 8922.978 - 
8973.391: 29.1345% ( 16) 00:09:44.298 8973.391 - 9023.803: 29.2837% ( 17) 00:09:44.298 9023.803 - 9074.215: 29.4242% ( 16) 00:09:44.298 9074.215 - 9124.628: 29.5822% ( 18) 00:09:44.298 9124.628 - 9175.040: 29.7577% ( 20) 00:09:44.298 9175.040 - 9225.452: 29.9508% ( 22) 00:09:44.298 9225.452 - 9275.865: 30.1352% ( 21) 00:09:44.298 9275.865 - 9326.277: 30.3810% ( 28) 00:09:44.298 9326.277 - 9376.689: 30.6355% ( 29) 00:09:44.298 9376.689 - 9427.102: 30.8901% ( 29) 00:09:44.298 9427.102 - 9477.514: 31.1271% ( 27) 00:09:44.298 9477.514 - 9527.926: 31.3904% ( 30) 00:09:44.298 9527.926 - 9578.338: 31.6713% ( 32) 00:09:44.298 9578.338 - 9628.751: 31.9874% ( 36) 00:09:44.298 9628.751 - 9679.163: 32.3560% ( 42) 00:09:44.298 9679.163 - 9729.575: 32.6896% ( 38) 00:09:44.298 9729.575 - 9779.988: 33.1285% ( 50) 00:09:44.298 9779.988 - 9830.400: 33.5323% ( 46) 00:09:44.298 9830.400 - 9880.812: 34.0502% ( 59) 00:09:44.298 9880.812 - 9931.225: 34.6559% ( 69) 00:09:44.298 9931.225 - 9981.637: 35.3230% ( 76) 00:09:44.298 9981.637 - 10032.049: 35.9989% ( 77) 00:09:44.298 10032.049 - 10082.462: 36.6134% ( 70) 00:09:44.298 10082.462 - 10132.874: 37.1840% ( 65) 00:09:44.298 10132.874 - 10183.286: 37.8687% ( 78) 00:09:44.298 10183.286 - 10233.698: 38.4480% ( 66) 00:09:44.298 10233.698 - 10284.111: 39.0625% ( 70) 00:09:44.298 10284.111 - 10334.523: 39.6506% ( 67) 00:09:44.298 10334.523 - 10384.935: 40.2300% ( 66) 00:09:44.298 10384.935 - 10435.348: 40.8357% ( 69) 00:09:44.298 10435.348 - 10485.760: 41.4677% ( 72) 00:09:44.298 10485.760 - 10536.172: 42.1875% ( 82) 00:09:44.298 10536.172 - 10586.585: 42.8020% ( 70) 00:09:44.298 10586.585 - 10636.997: 43.4340% ( 72) 00:09:44.298 10636.997 - 10687.409: 44.0572% ( 71) 00:09:44.298 10687.409 - 10737.822: 44.7068% ( 74) 00:09:44.298 10737.822 - 10788.234: 45.3388% ( 72) 00:09:44.298 10788.234 - 10838.646: 45.9357% ( 68) 00:09:44.298 10838.646 - 10889.058: 46.4888% ( 63) 00:09:44.298 10889.058 - 10939.471: 47.0593% ( 65) 00:09:44.298 10939.471 - 10989.883: 47.6211% ( 64) 00:09:44.298 10989.883 - 11040.295: 48.2268% ( 69) 00:09:44.298 11040.295 - 11090.708: 48.8062% ( 66) 00:09:44.298 11090.708 - 11141.120: 49.4206% ( 70) 00:09:44.298 11141.120 - 11191.532: 50.0176% ( 68) 00:09:44.298 11191.532 - 11241.945: 50.6320% ( 70) 00:09:44.298 11241.945 - 11292.357: 51.1850% ( 63) 00:09:44.298 11292.357 - 11342.769: 51.7293% ( 62) 00:09:44.298 11342.769 - 11393.182: 52.2472% ( 59) 00:09:44.298 11393.182 - 11443.594: 52.8090% ( 64) 00:09:44.298 11443.594 - 11494.006: 53.3620% ( 63) 00:09:44.298 11494.006 - 11544.418: 53.8448% ( 55) 00:09:44.298 11544.418 - 11594.831: 54.3803% ( 61) 00:09:44.298 11594.831 - 11645.243: 54.8982% ( 59) 00:09:44.298 11645.243 - 11695.655: 55.4073% ( 58) 00:09:44.298 11695.655 - 11746.068: 55.9340% ( 60) 00:09:44.298 11746.068 - 11796.480: 56.4607% ( 60) 00:09:44.298 11796.480 - 11846.892: 56.9522% ( 56) 00:09:44.298 11846.892 - 11897.305: 57.5228% ( 65) 00:09:44.298 11897.305 - 11947.717: 58.1461% ( 71) 00:09:44.298 11947.717 - 11998.129: 58.7166% ( 65) 00:09:44.298 11998.129 - 12048.542: 59.1907% ( 54) 00:09:44.298 12048.542 - 12098.954: 59.5681% ( 43) 00:09:44.298 12098.954 - 12149.366: 60.0070% ( 50) 00:09:44.298 12149.366 - 12199.778: 60.3494% ( 39) 00:09:44.298 12199.778 - 12250.191: 60.6917% ( 39) 00:09:44.298 12250.191 - 12300.603: 61.0604% ( 42) 00:09:44.298 12300.603 - 12351.015: 61.4642% ( 46) 00:09:44.298 12351.015 - 12401.428: 61.8153% ( 40) 00:09:44.298 12401.428 - 12451.840: 62.1752% ( 41) 00:09:44.298 12451.840 - 12502.252: 62.5000% ( 37) 
00:09:44.298 12502.252 - 12552.665: 62.8599% ( 41) 00:09:44.298 12552.665 - 12603.077: 63.2637% ( 46) 00:09:44.298 12603.077 - 12653.489: 63.6938% ( 49) 00:09:44.298 12653.489 - 12703.902: 64.0801% ( 44) 00:09:44.298 12703.902 - 12754.314: 64.5365% ( 52) 00:09:44.298 12754.314 - 12804.726: 64.9666% ( 49) 00:09:44.298 12804.726 - 12855.138: 65.4582% ( 56) 00:09:44.298 12855.138 - 12905.551: 66.0025% ( 62) 00:09:44.298 12905.551 - 13006.375: 67.0295% ( 117) 00:09:44.298 13006.375 - 13107.200: 67.9775% ( 108) 00:09:44.298 13107.200 - 13208.025: 68.9782% ( 114) 00:09:44.298 13208.025 - 13308.849: 70.1106% ( 129) 00:09:44.298 13308.849 - 13409.674: 71.3571% ( 142) 00:09:44.298 13409.674 - 13510.498: 72.4982% ( 130) 00:09:44.298 13510.498 - 13611.323: 73.5779% ( 123) 00:09:44.298 13611.323 - 13712.148: 74.9034% ( 151) 00:09:44.298 13712.148 - 13812.972: 76.3079% ( 160) 00:09:44.298 13812.972 - 13913.797: 77.5720% ( 144) 00:09:44.298 13913.797 - 14014.622: 78.6692% ( 125) 00:09:44.298 14014.622 - 14115.446: 79.8367% ( 133) 00:09:44.298 14115.446 - 14216.271: 81.0393% ( 137) 00:09:44.298 14216.271 - 14317.095: 82.2156% ( 134) 00:09:44.298 14317.095 - 14417.920: 83.2953% ( 123) 00:09:44.298 14417.920 - 14518.745: 84.4101% ( 127) 00:09:44.298 14518.745 - 14619.569: 85.4284% ( 116) 00:09:44.298 14619.569 - 14720.394: 86.4027% ( 111) 00:09:44.298 14720.394 - 14821.218: 87.3683% ( 110) 00:09:44.299 14821.218 - 14922.043: 88.2900% ( 105) 00:09:44.299 14922.043 - 15022.868: 89.1766% ( 101) 00:09:44.299 15022.868 - 15123.692: 90.0105% ( 95) 00:09:44.299 15123.692 - 15224.517: 90.7216% ( 81) 00:09:44.299 15224.517 - 15325.342: 91.4326% ( 81) 00:09:44.299 15325.342 - 15426.166: 92.0119% ( 66) 00:09:44.299 15426.166 - 15526.991: 92.5825% ( 65) 00:09:44.299 15526.991 - 15627.815: 93.1355% ( 63) 00:09:44.299 15627.815 - 15728.640: 93.6008% ( 53) 00:09:44.299 15728.640 - 15829.465: 94.0221% ( 48) 00:09:44.299 15829.465 - 15930.289: 94.4171% ( 45) 00:09:44.299 15930.289 - 16031.114: 94.7595% ( 39) 00:09:44.299 16031.114 - 16131.938: 95.0579% ( 34) 00:09:44.299 16131.938 - 16232.763: 95.3739% ( 36) 00:09:44.299 16232.763 - 16333.588: 95.5758% ( 23) 00:09:44.299 16333.588 - 16434.412: 95.7602% ( 21) 00:09:44.299 16434.412 - 16535.237: 95.9182% ( 18) 00:09:44.299 16535.237 - 16636.062: 96.0499% ( 15) 00:09:44.299 16636.062 - 16736.886: 96.1728% ( 14) 00:09:44.299 16736.886 - 16837.711: 96.3220% ( 17) 00:09:44.299 16837.711 - 16938.535: 96.4975% ( 20) 00:09:44.299 16938.535 - 17039.360: 96.6731% ( 20) 00:09:44.299 17039.360 - 17140.185: 96.8662% ( 22) 00:09:44.299 17140.185 - 17241.009: 97.0506% ( 21) 00:09:44.299 17241.009 - 17341.834: 97.2173% ( 19) 00:09:44.299 17341.834 - 17442.658: 97.3841% ( 19) 00:09:44.299 17442.658 - 17543.483: 97.5246% ( 16) 00:09:44.299 17543.483 - 17644.308: 97.6562% ( 15) 00:09:44.299 17644.308 - 17745.132: 97.7879% ( 15) 00:09:44.299 17745.132 - 17845.957: 97.9196% ( 15) 00:09:44.299 17845.957 - 17946.782: 98.0600% ( 16) 00:09:44.299 17946.782 - 18047.606: 98.1829% ( 14) 00:09:44.299 18047.606 - 18148.431: 98.2795% ( 11) 00:09:44.299 18148.431 - 18249.255: 98.3761% ( 11) 00:09:44.299 18249.255 - 18350.080: 98.4375% ( 7) 00:09:44.299 18350.080 - 18450.905: 98.4814% ( 5) 00:09:44.299 18450.905 - 18551.729: 98.5253% ( 5) 00:09:44.299 18551.729 - 18652.554: 98.5779% ( 6) 00:09:44.299 18652.554 - 18753.378: 98.6218% ( 5) 00:09:44.299 18753.378 - 18854.203: 98.6657% ( 5) 00:09:44.299 18854.203 - 18955.028: 98.6921% ( 3) 00:09:44.299 18955.028 - 19055.852: 98.7184% ( 3) 00:09:44.299 19055.852 
- 19156.677: 98.7447% ( 3) 00:09:44.299 19156.677 - 19257.502: 98.7711% ( 3) 00:09:44.299 19257.502 - 19358.326: 98.8062% ( 4) 00:09:44.299 19358.326 - 19459.151: 98.8325% ( 3) 00:09:44.299 19459.151 - 19559.975: 98.8676% ( 4) 00:09:44.299 19559.975 - 19660.800: 98.8764% ( 1) 00:09:44.299 32465.526 - 32667.175: 98.9203% ( 5) 00:09:44.299 32667.175 - 32868.825: 98.9730% ( 6) 00:09:44.299 32868.825 - 33070.474: 99.0256% ( 6) 00:09:44.299 33070.474 - 33272.123: 99.0871% ( 7) 00:09:44.299 33272.123 - 33473.772: 99.1397% ( 6) 00:09:44.299 33473.772 - 33675.422: 99.1924% ( 6) 00:09:44.299 33675.422 - 33877.071: 99.2451% ( 6) 00:09:44.299 33877.071 - 34078.720: 99.2978% ( 6) 00:09:44.299 34078.720 - 34280.369: 99.3592% ( 7) 00:09:44.299 34280.369 - 34482.018: 99.4119% ( 6) 00:09:44.299 34482.018 - 34683.668: 99.4645% ( 6) 00:09:44.299 34683.668 - 34885.317: 99.5172% ( 6) 00:09:44.299 34885.317 - 35086.966: 99.5611% ( 5) 00:09:44.299 35086.966 - 35288.615: 99.6138% ( 6) 00:09:44.299 35288.615 - 35490.265: 99.6664% ( 6) 00:09:44.299 35490.265 - 35691.914: 99.7191% ( 6) 00:09:44.299 35691.914 - 35893.563: 99.7718% ( 6) 00:09:44.299 35893.563 - 36095.212: 99.8332% ( 7) 00:09:44.299 36095.212 - 36296.862: 99.8859% ( 6) 00:09:44.299 36296.862 - 36498.511: 99.9298% ( 5) 00:09:44.299 36498.511 - 36700.160: 99.9824% ( 6) 00:09:44.299 36700.160 - 36901.809: 100.0000% ( 2) 00:09:44.299 00:09:44.299 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:44.299 ============================================================================== 00:09:44.299 Range in us Cumulative IO count 00:09:44.299 5595.766 - 5620.972: 0.1668% ( 19) 00:09:44.299 5620.972 - 5646.178: 0.6145% ( 51) 00:09:44.299 5646.178 - 5671.385: 1.0270% ( 47) 00:09:44.299 5671.385 - 5696.591: 1.4572% ( 49) 00:09:44.299 5696.591 - 5721.797: 1.8258% ( 42) 00:09:44.299 5721.797 - 5747.003: 2.1155% ( 33) 00:09:44.299 5747.003 - 5772.209: 2.4052% ( 33) 00:09:44.299 5772.209 - 5797.415: 2.6598% ( 29) 00:09:44.299 5797.415 - 5822.622: 2.9407% ( 32) 00:09:44.299 5822.622 - 5847.828: 3.2654% ( 37) 00:09:44.299 5847.828 - 5873.034: 3.6078% ( 39) 00:09:44.299 5873.034 - 5898.240: 3.9589% ( 40) 00:09:44.299 5898.240 - 5923.446: 4.2925% ( 38) 00:09:44.299 5923.446 - 5948.652: 4.6173% ( 37) 00:09:44.299 5948.652 - 5973.858: 4.9772% ( 41) 00:09:44.299 5973.858 - 5999.065: 5.3195% ( 39) 00:09:44.299 5999.065 - 6024.271: 5.6355% ( 36) 00:09:44.299 6024.271 - 6049.477: 5.9691% ( 38) 00:09:44.299 6049.477 - 6074.683: 6.3202% ( 40) 00:09:44.299 6074.683 - 6099.889: 6.6538% ( 38) 00:09:44.299 6099.889 - 6125.095: 7.0312% ( 43) 00:09:44.299 6125.095 - 6150.302: 7.3560% ( 37) 00:09:44.299 6150.302 - 6175.508: 7.6721% ( 36) 00:09:44.299 6175.508 - 6200.714: 8.0232% ( 40) 00:09:44.299 6200.714 - 6225.920: 8.3655% ( 39) 00:09:44.299 6225.920 - 6251.126: 8.6815% ( 36) 00:09:44.299 6251.126 - 6276.332: 9.0151% ( 38) 00:09:44.299 6276.332 - 6301.538: 9.3399% ( 37) 00:09:44.299 6301.538 - 6326.745: 9.6735% ( 38) 00:09:44.299 6326.745 - 6351.951: 9.9807% ( 35) 00:09:44.299 6351.951 - 6377.157: 10.3230% ( 39) 00:09:44.299 6377.157 - 6402.363: 10.6654% ( 39) 00:09:44.299 6402.363 - 6427.569: 10.9814% ( 36) 00:09:44.299 6427.569 - 6452.775: 11.3237% ( 39) 00:09:44.299 6452.775 - 6503.188: 11.9909% ( 76) 00:09:44.299 6503.188 - 6553.600: 12.6668% ( 77) 00:09:44.299 6553.600 - 6604.012: 13.3427% ( 77) 00:09:44.299 6604.012 - 6654.425: 14.0449% ( 80) 00:09:44.299 6654.425 - 6704.837: 14.7472% ( 80) 00:09:44.299 6704.837 - 6755.249: 15.4670% ( 82) 00:09:44.299 6755.249 
- 6805.662: 16.1868% ( 82) 00:09:44.299 6805.662 - 6856.074: 16.9066% ( 82) 00:09:44.299 6856.074 - 6906.486: 17.6440% ( 84) 00:09:44.299 6906.486 - 6956.898: 18.3462% ( 80) 00:09:44.299 6956.898 - 7007.311: 19.0923% ( 85) 00:09:44.299 7007.311 - 7057.723: 19.8297% ( 84) 00:09:44.299 7057.723 - 7108.135: 20.6022% ( 88) 00:09:44.299 7108.135 - 7158.548: 21.3308% ( 83) 00:09:44.299 7158.548 - 7208.960: 22.0769% ( 85) 00:09:44.299 7208.960 - 7259.372: 22.8055% ( 83) 00:09:44.299 7259.372 - 7309.785: 23.5516% ( 85) 00:09:44.299 7309.785 - 7360.197: 24.3153% ( 87) 00:09:44.299 7360.197 - 7410.609: 25.0263% ( 81) 00:09:44.299 7410.609 - 7461.022: 25.5530% ( 60) 00:09:44.299 7461.022 - 7511.434: 25.7812% ( 26) 00:09:44.299 7511.434 - 7561.846: 25.9656% ( 21) 00:09:44.299 7561.846 - 7612.258: 26.0621% ( 11) 00:09:44.299 7612.258 - 7662.671: 26.1763% ( 13) 00:09:44.299 7662.671 - 7713.083: 26.2992% ( 14) 00:09:44.299 7713.083 - 7763.495: 26.3957% ( 11) 00:09:44.299 7763.495 - 7813.908: 26.4835% ( 10) 00:09:44.299 7813.908 - 7864.320: 26.5625% ( 9) 00:09:44.299 7864.320 - 7914.732: 26.6503% ( 10) 00:09:44.299 7914.732 - 7965.145: 26.7205% ( 8) 00:09:44.299 7965.145 - 8015.557: 26.8083% ( 10) 00:09:44.299 8015.557 - 8065.969: 26.8961% ( 10) 00:09:44.299 8065.969 - 8116.382: 26.9751% ( 9) 00:09:44.299 8116.382 - 8166.794: 27.0541% ( 9) 00:09:44.299 8166.794 - 8217.206: 27.1682% ( 13) 00:09:44.299 8217.206 - 8267.618: 27.2735% ( 12) 00:09:44.299 8267.618 - 8318.031: 27.3789% ( 12) 00:09:44.299 8318.031 - 8368.443: 27.5105% ( 15) 00:09:44.299 8368.443 - 8418.855: 27.6159% ( 12) 00:09:44.299 8418.855 - 8469.268: 27.7739% ( 18) 00:09:44.299 8469.268 - 8519.680: 27.9319% ( 18) 00:09:44.299 8519.680 - 8570.092: 28.1074% ( 20) 00:09:44.299 8570.092 - 8620.505: 28.2742% ( 19) 00:09:44.299 8620.505 - 8670.917: 28.3883% ( 13) 00:09:44.299 8670.917 - 8721.329: 28.5200% ( 15) 00:09:44.299 8721.329 - 8771.742: 28.6517% ( 15) 00:09:44.299 8771.742 - 8822.154: 28.7921% ( 16) 00:09:44.299 8822.154 - 8872.566: 28.9765% ( 21) 00:09:44.299 8872.566 - 8922.978: 29.1871% ( 24) 00:09:44.299 8922.978 - 8973.391: 29.3978% ( 24) 00:09:44.299 8973.391 - 9023.803: 29.6261% ( 26) 00:09:44.299 9023.803 - 9074.215: 29.8367% ( 24) 00:09:44.299 9074.215 - 9124.628: 30.0474% ( 24) 00:09:44.299 9124.628 - 9175.040: 30.3020% ( 29) 00:09:44.299 9175.040 - 9225.452: 30.5741% ( 31) 00:09:44.299 9225.452 - 9275.865: 30.8462% ( 31) 00:09:44.299 9275.865 - 9326.277: 31.1271% ( 32) 00:09:44.299 9326.277 - 9376.689: 31.4080% ( 32) 00:09:44.299 9376.689 - 9427.102: 31.6713% ( 30) 00:09:44.299 9427.102 - 9477.514: 31.9874% ( 36) 00:09:44.299 9477.514 - 9527.926: 32.2770% ( 33) 00:09:44.299 9527.926 - 9578.338: 32.6018% ( 37) 00:09:44.299 9578.338 - 9628.751: 32.9793% ( 43) 00:09:44.299 9628.751 - 9679.163: 33.3831% ( 46) 00:09:44.299 9679.163 - 9729.575: 33.7518% ( 42) 00:09:44.299 9729.575 - 9779.988: 34.1907% ( 50) 00:09:44.299 9779.988 - 9830.400: 34.6032% ( 47) 00:09:44.299 9830.400 - 9880.812: 35.1124% ( 58) 00:09:44.299 9880.812 - 9931.225: 35.6829% ( 65) 00:09:44.299 9931.225 - 9981.637: 36.2360% ( 63) 00:09:44.299 9981.637 - 10032.049: 36.8329% ( 68) 00:09:44.299 10032.049 - 10082.462: 37.4122% ( 66) 00:09:44.299 10082.462 - 10132.874: 38.0179% ( 69) 00:09:44.299 10132.874 - 10183.286: 38.6675% ( 74) 00:09:44.299 10183.286 - 10233.698: 39.3346% ( 76) 00:09:44.299 10233.698 - 10284.111: 39.9491% ( 70) 00:09:44.299 10284.111 - 10334.523: 40.5548% ( 69) 00:09:44.299 10334.523 - 10384.935: 41.1692% ( 70) 00:09:44.299 10384.935 - 10435.348: 
41.7135% ( 62) 00:09:44.299 10435.348 - 10485.760: 42.2753% ( 64) 00:09:44.299 10485.760 - 10536.172: 42.8546% ( 66) 00:09:44.300 10536.172 - 10586.585: 43.4867% ( 72) 00:09:44.300 10586.585 - 10636.997: 44.0748% ( 67) 00:09:44.300 10636.997 - 10687.409: 44.6717% ( 68) 00:09:44.300 10687.409 - 10737.822: 45.2511% ( 66) 00:09:44.300 10737.822 - 10788.234: 45.8831% ( 72) 00:09:44.300 10788.234 - 10838.646: 46.5239% ( 73) 00:09:44.300 10838.646 - 10889.058: 47.1208% ( 68) 00:09:44.300 10889.058 - 10939.471: 47.6914% ( 65) 00:09:44.300 10939.471 - 10989.883: 48.2532% ( 64) 00:09:44.300 10989.883 - 11040.295: 48.8501% ( 68) 00:09:44.300 11040.295 - 11090.708: 49.3855% ( 61) 00:09:44.300 11090.708 - 11141.120: 49.9122% ( 60) 00:09:44.300 11141.120 - 11191.532: 50.4477% ( 61) 00:09:44.300 11191.532 - 11241.945: 50.9480% ( 57) 00:09:44.300 11241.945 - 11292.357: 51.4747% ( 60) 00:09:44.300 11292.357 - 11342.769: 51.9838% ( 58) 00:09:44.300 11342.769 - 11393.182: 52.5281% ( 62) 00:09:44.300 11393.182 - 11443.594: 53.0197% ( 56) 00:09:44.300 11443.594 - 11494.006: 53.5463% ( 60) 00:09:44.300 11494.006 - 11544.418: 54.1345% ( 67) 00:09:44.300 11544.418 - 11594.831: 54.7051% ( 65) 00:09:44.300 11594.831 - 11645.243: 55.2844% ( 66) 00:09:44.300 11645.243 - 11695.655: 55.8199% ( 61) 00:09:44.300 11695.655 - 11746.068: 56.3466% ( 60) 00:09:44.300 11746.068 - 11796.480: 56.8645% ( 59) 00:09:44.300 11796.480 - 11846.892: 57.3736% ( 58) 00:09:44.300 11846.892 - 11897.305: 57.8564% ( 55) 00:09:44.300 11897.305 - 11947.717: 58.3480% ( 56) 00:09:44.300 11947.717 - 11998.129: 58.7693% ( 48) 00:09:44.300 11998.129 - 12048.542: 59.1819% ( 47) 00:09:44.300 12048.542 - 12098.954: 59.5681% ( 44) 00:09:44.300 12098.954 - 12149.366: 59.9280% ( 41) 00:09:44.300 12149.366 - 12199.778: 60.2879% ( 41) 00:09:44.300 12199.778 - 12250.191: 60.6215% ( 38) 00:09:44.300 12250.191 - 12300.603: 60.9375% ( 36) 00:09:44.300 12300.603 - 12351.015: 61.3676% ( 49) 00:09:44.300 12351.015 - 12401.428: 61.8855% ( 59) 00:09:44.300 12401.428 - 12451.840: 62.3157% ( 49) 00:09:44.300 12451.840 - 12502.252: 62.7633% ( 51) 00:09:44.300 12502.252 - 12552.665: 63.1057% ( 39) 00:09:44.300 12552.665 - 12603.077: 63.5270% ( 48) 00:09:44.300 12603.077 - 12653.489: 63.9747% ( 51) 00:09:44.300 12653.489 - 12703.902: 64.3961% ( 48) 00:09:44.300 12703.902 - 12754.314: 64.7823% ( 44) 00:09:44.300 12754.314 - 12804.726: 65.1949% ( 47) 00:09:44.300 12804.726 - 12855.138: 65.6162% ( 48) 00:09:44.300 12855.138 - 12905.551: 66.1078% ( 56) 00:09:44.300 12905.551 - 13006.375: 66.9768% ( 99) 00:09:44.300 13006.375 - 13107.200: 67.8634% ( 101) 00:09:44.300 13107.200 - 13208.025: 68.8904% ( 117) 00:09:44.300 13208.025 - 13308.849: 69.9350% ( 119) 00:09:44.300 13308.849 - 13409.674: 70.9621% ( 117) 00:09:44.300 13409.674 - 13510.498: 72.0857% ( 128) 00:09:44.300 13510.498 - 13611.323: 73.2707% ( 135) 00:09:44.300 13611.323 - 13712.148: 74.6928% ( 162) 00:09:44.300 13712.148 - 13812.972: 75.8778% ( 135) 00:09:44.300 13812.972 - 13913.797: 77.0014% ( 128) 00:09:44.300 13913.797 - 14014.622: 78.1074% ( 126) 00:09:44.300 14014.622 - 14115.446: 79.2135% ( 126) 00:09:44.300 14115.446 - 14216.271: 80.3195% ( 126) 00:09:44.300 14216.271 - 14317.095: 81.4343% ( 127) 00:09:44.300 14317.095 - 14417.920: 82.5843% ( 131) 00:09:44.300 14417.920 - 14518.745: 83.6552% ( 122) 00:09:44.300 14518.745 - 14619.569: 84.6296% ( 111) 00:09:44.300 14619.569 - 14720.394: 85.5864% ( 109) 00:09:44.300 14720.394 - 14821.218: 86.4817% ( 102) 00:09:44.300 14821.218 - 14922.043: 87.3859% ( 103) 
00:09:44.300 14922.043 - 15022.868: 88.2637% ( 100) 00:09:44.300 15022.868 - 15123.692: 89.1327% ( 99) 00:09:44.300 15123.692 - 15224.517: 89.9228% ( 90) 00:09:44.300 15224.517 - 15325.342: 90.7303% ( 92) 00:09:44.300 15325.342 - 15426.166: 91.3887% ( 75) 00:09:44.300 15426.166 - 15526.991: 92.0207% ( 72) 00:09:44.300 15526.991 - 15627.815: 92.6001% ( 66) 00:09:44.300 15627.815 - 15728.640: 93.2145% ( 70) 00:09:44.300 15728.640 - 15829.465: 93.6622% ( 51) 00:09:44.300 15829.465 - 15930.289: 94.1187% ( 52) 00:09:44.300 15930.289 - 16031.114: 94.4961% ( 43) 00:09:44.300 16031.114 - 16131.938: 94.8209% ( 37) 00:09:44.300 16131.938 - 16232.763: 95.0843% ( 30) 00:09:44.300 16232.763 - 16333.588: 95.3652% ( 32) 00:09:44.300 16333.588 - 16434.412: 95.6285% ( 30) 00:09:44.300 16434.412 - 16535.237: 95.8831% ( 29) 00:09:44.300 16535.237 - 16636.062: 96.1903% ( 35) 00:09:44.300 16636.062 - 16736.886: 96.4010% ( 24) 00:09:44.300 16736.886 - 16837.711: 96.6380% ( 27) 00:09:44.300 16837.711 - 16938.535: 96.8487% ( 24) 00:09:44.300 16938.535 - 17039.360: 97.0593% ( 24) 00:09:44.300 17039.360 - 17140.185: 97.2788% ( 25) 00:09:44.300 17140.185 - 17241.009: 97.4807% ( 23) 00:09:44.300 17241.009 - 17341.834: 97.6299% ( 17) 00:09:44.300 17341.834 - 17442.658: 97.7440% ( 13) 00:09:44.300 17442.658 - 17543.483: 97.8581% ( 13) 00:09:44.300 17543.483 - 17644.308: 97.9898% ( 15) 00:09:44.300 17644.308 - 17745.132: 98.1303% ( 16) 00:09:44.300 17745.132 - 17845.957: 98.2268% ( 11) 00:09:44.300 17845.957 - 17946.782: 98.3146% ( 10) 00:09:44.300 17946.782 - 18047.606: 98.3848% ( 8) 00:09:44.300 18047.606 - 18148.431: 98.4199% ( 4) 00:09:44.300 18148.431 - 18249.255: 98.4638% ( 5) 00:09:44.300 18249.255 - 18350.080: 98.5077% ( 5) 00:09:44.300 18350.080 - 18450.905: 98.5516% ( 5) 00:09:44.300 18450.905 - 18551.729: 98.5867% ( 4) 00:09:44.300 18551.729 - 18652.554: 98.6306% ( 5) 00:09:44.300 18652.554 - 18753.378: 98.6657% ( 4) 00:09:44.300 18753.378 - 18854.203: 98.7096% ( 5) 00:09:44.300 18854.203 - 18955.028: 98.7535% ( 5) 00:09:44.300 18955.028 - 19055.852: 98.7886% ( 4) 00:09:44.300 19055.852 - 19156.677: 98.8237% ( 4) 00:09:44.300 19156.677 - 19257.502: 98.8676% ( 5) 00:09:44.300 19257.502 - 19358.326: 98.8764% ( 1) 00:09:44.300 31457.280 - 31658.929: 98.9291% ( 6) 00:09:44.300 31658.929 - 31860.578: 98.9993% ( 8) 00:09:44.300 31860.578 - 32062.228: 99.0520% ( 6) 00:09:44.300 32062.228 - 32263.877: 99.1046% ( 6) 00:09:44.300 32263.877 - 32465.526: 99.1573% ( 6) 00:09:44.300 32465.526 - 32667.175: 99.2100% ( 6) 00:09:44.300 32667.175 - 32868.825: 99.2626% ( 6) 00:09:44.300 32868.825 - 33070.474: 99.3153% ( 6) 00:09:44.300 33070.474 - 33272.123: 99.3680% ( 6) 00:09:44.300 33272.123 - 33473.772: 99.4206% ( 6) 00:09:44.300 33473.772 - 33675.422: 99.4733% ( 6) 00:09:44.300 33675.422 - 33877.071: 99.5260% ( 6) 00:09:44.300 33877.071 - 34078.720: 99.5699% ( 5) 00:09:44.300 34078.720 - 34280.369: 99.6138% ( 5) 00:09:44.300 34280.369 - 34482.018: 99.6752% ( 7) 00:09:44.300 34482.018 - 34683.668: 99.7279% ( 6) 00:09:44.300 34683.668 - 34885.317: 99.7805% ( 6) 00:09:44.300 34885.317 - 35086.966: 99.8332% ( 6) 00:09:44.300 35086.966 - 35288.615: 99.8859% ( 6) 00:09:44.300 35288.615 - 35490.265: 99.9473% ( 7) 00:09:44.300 35490.265 - 35691.914: 99.9912% ( 5) 00:09:44.300 35691.914 - 35893.563: 100.0000% ( 1) 00:09:44.300 00:09:44.300 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:44.300 ============================================================================== 00:09:44.300 Range in us Cumulative IO 
count 00:09:44.300 5570.560 - 5595.766: 0.0966% ( 11) 00:09:44.300 5595.766 - 5620.972: 0.4038% ( 35) 00:09:44.300 5620.972 - 5646.178: 0.8164% ( 47) 00:09:44.300 5646.178 - 5671.385: 1.0885% ( 31) 00:09:44.300 5671.385 - 5696.591: 1.4835% ( 45) 00:09:44.300 5696.591 - 5721.797: 1.7820% ( 34) 00:09:44.300 5721.797 - 5747.003: 2.0629% ( 32) 00:09:44.300 5747.003 - 5772.209: 2.3525% ( 33) 00:09:44.300 5772.209 - 5797.415: 2.6861% ( 38) 00:09:44.300 5797.415 - 5822.622: 2.9933% ( 35) 00:09:44.300 5822.622 - 5847.828: 3.3093% ( 36) 00:09:44.300 5847.828 - 5873.034: 3.6692% ( 41) 00:09:44.300 5873.034 - 5898.240: 4.0204% ( 40) 00:09:44.300 5898.240 - 5923.446: 4.3276% ( 35) 00:09:44.300 5923.446 - 5948.652: 4.6612% ( 38) 00:09:44.300 5948.652 - 5973.858: 4.9947% ( 38) 00:09:44.300 5973.858 - 5999.065: 5.3546% ( 41) 00:09:44.300 5999.065 - 6024.271: 5.6706% ( 36) 00:09:44.300 6024.271 - 6049.477: 6.0042% ( 38) 00:09:44.300 6049.477 - 6074.683: 6.3553% ( 40) 00:09:44.300 6074.683 - 6099.889: 6.7328% ( 43) 00:09:44.300 6099.889 - 6125.095: 7.0137% ( 32) 00:09:44.300 6125.095 - 6150.302: 7.3648% ( 40) 00:09:44.300 6150.302 - 6175.508: 7.7159% ( 40) 00:09:44.300 6175.508 - 6200.714: 8.0758% ( 41) 00:09:44.300 6200.714 - 6225.920: 8.4006% ( 37) 00:09:44.300 6225.920 - 6251.126: 8.7605% ( 41) 00:09:44.300 6251.126 - 6276.332: 9.0853% ( 37) 00:09:44.300 6276.332 - 6301.538: 9.4277% ( 39) 00:09:44.300 6301.538 - 6326.745: 9.7788% ( 40) 00:09:44.300 6326.745 - 6351.951: 10.1211% ( 39) 00:09:44.300 6351.951 - 6377.157: 10.4635% ( 39) 00:09:44.300 6377.157 - 6402.363: 10.8234% ( 41) 00:09:44.300 6402.363 - 6427.569: 11.1833% ( 41) 00:09:44.300 6427.569 - 6452.775: 11.5081% ( 37) 00:09:44.300 6452.775 - 6503.188: 12.2367% ( 83) 00:09:44.300 6503.188 - 6553.600: 12.9038% ( 76) 00:09:44.300 6553.600 - 6604.012: 13.6148% ( 81) 00:09:44.300 6604.012 - 6654.425: 14.3434% ( 83) 00:09:44.300 6654.425 - 6704.837: 15.0720% ( 83) 00:09:44.300 6704.837 - 6755.249: 15.7654% ( 79) 00:09:44.300 6755.249 - 6805.662: 16.5028% ( 84) 00:09:44.300 6805.662 - 6856.074: 17.1875% ( 78) 00:09:44.300 6856.074 - 6906.486: 17.9073% ( 82) 00:09:44.300 6906.486 - 6956.898: 18.6447% ( 84) 00:09:44.300 6956.898 - 7007.311: 19.3645% ( 82) 00:09:44.300 7007.311 - 7057.723: 20.0930% ( 83) 00:09:44.300 7057.723 - 7108.135: 20.8129% ( 82) 00:09:44.300 7108.135 - 7158.548: 21.5239% ( 81) 00:09:44.300 7158.548 - 7208.960: 22.2437% ( 82) 00:09:44.300 7208.960 - 7259.372: 22.9547% ( 81) 00:09:44.300 7259.372 - 7309.785: 23.6745% ( 82) 00:09:44.300 7309.785 - 7360.197: 24.4119% ( 84) 00:09:44.301 7360.197 - 7410.609: 25.1141% ( 80) 00:09:44.301 7410.609 - 7461.022: 25.5969% ( 55) 00:09:44.301 7461.022 - 7511.434: 25.8251% ( 26) 00:09:44.301 7511.434 - 7561.846: 26.0183% ( 22) 00:09:44.301 7561.846 - 7612.258: 26.1236% ( 12) 00:09:44.301 7612.258 - 7662.671: 26.2377% ( 13) 00:09:44.301 7662.671 - 7713.083: 26.3255% ( 10) 00:09:44.301 7713.083 - 7763.495: 26.4221% ( 11) 00:09:44.301 7763.495 - 7813.908: 26.4923% ( 8) 00:09:44.301 7813.908 - 7864.320: 26.5713% ( 9) 00:09:44.301 7864.320 - 7914.732: 26.6503% ( 9) 00:09:44.301 7914.732 - 7965.145: 26.7381% ( 10) 00:09:44.301 7965.145 - 8015.557: 26.8258% ( 10) 00:09:44.301 8015.557 - 8065.969: 26.9136% ( 10) 00:09:44.301 8065.969 - 8116.382: 26.9926% ( 9) 00:09:44.301 8116.382 - 8166.794: 27.0541% ( 7) 00:09:44.301 8166.794 - 8217.206: 27.1331% ( 9) 00:09:44.301 8217.206 - 8267.618: 27.2209% ( 10) 00:09:44.301 8267.618 - 8318.031: 27.2999% ( 9) 00:09:44.301 8318.031 - 8368.443: 27.3876% ( 10) 
00:09:44.301 8368.443 - 8418.855: 27.4754% ( 10) 00:09:44.301 8418.855 - 8469.268: 27.5456% ( 8) 00:09:44.301 8469.268 - 8519.680: 27.6246% ( 9) 00:09:44.301 8519.680 - 8570.092: 27.7388% ( 13) 00:09:44.301 8570.092 - 8620.505: 27.8441% ( 12) 00:09:44.301 8620.505 - 8670.917: 27.9582% ( 13) 00:09:44.301 8670.917 - 8721.329: 28.0636% ( 12) 00:09:44.301 8721.329 - 8771.742: 28.1689% ( 12) 00:09:44.301 8771.742 - 8822.154: 28.2918% ( 14) 00:09:44.301 8822.154 - 8872.566: 28.4147% ( 14) 00:09:44.301 8872.566 - 8922.978: 28.5815% ( 19) 00:09:44.301 8922.978 - 8973.391: 28.7482% ( 19) 00:09:44.301 8973.391 - 9023.803: 28.9150% ( 19) 00:09:44.301 9023.803 - 9074.215: 29.1257% ( 24) 00:09:44.301 9074.215 - 9124.628: 29.3539% ( 26) 00:09:44.301 9124.628 - 9175.040: 29.5646% ( 24) 00:09:44.301 9175.040 - 9225.452: 29.7841% ( 25) 00:09:44.301 9225.452 - 9275.865: 30.0211% ( 27) 00:09:44.301 9275.865 - 9326.277: 30.2932% ( 31) 00:09:44.301 9326.277 - 9376.689: 30.5390% ( 28) 00:09:44.301 9376.689 - 9427.102: 30.8287% ( 33) 00:09:44.301 9427.102 - 9477.514: 31.1447% ( 36) 00:09:44.301 9477.514 - 9527.926: 31.4607% ( 36) 00:09:44.301 9527.926 - 9578.338: 31.8030% ( 39) 00:09:44.301 9578.338 - 9628.751: 32.1893% ( 44) 00:09:44.301 9628.751 - 9679.163: 32.6545% ( 53) 00:09:44.301 9679.163 - 9729.575: 33.1636% ( 58) 00:09:44.301 9729.575 - 9779.988: 33.6113% ( 51) 00:09:44.301 9779.988 - 9830.400: 34.1204% ( 58) 00:09:44.301 9830.400 - 9880.812: 34.6120% ( 56) 00:09:44.301 9880.812 - 9931.225: 35.1826% ( 65) 00:09:44.301 9931.225 - 9981.637: 35.7707% ( 67) 00:09:44.301 9981.637 - 10032.049: 36.3588% ( 67) 00:09:44.301 10032.049 - 10082.462: 37.0084% ( 74) 00:09:44.301 10082.462 - 10132.874: 37.6317% ( 71) 00:09:44.301 10132.874 - 10183.286: 38.2725% ( 73) 00:09:44.301 10183.286 - 10233.698: 38.8869% ( 70) 00:09:44.301 10233.698 - 10284.111: 39.4751% ( 67) 00:09:44.301 10284.111 - 10334.523: 40.1598% ( 78) 00:09:44.301 10334.523 - 10384.935: 40.7742% ( 70) 00:09:44.301 10384.935 - 10435.348: 41.3799% ( 69) 00:09:44.301 10435.348 - 10485.760: 42.0032% ( 71) 00:09:44.301 10485.760 - 10536.172: 42.6440% ( 73) 00:09:44.301 10536.172 - 10586.585: 43.3199% ( 77) 00:09:44.301 10586.585 - 10636.997: 43.9870% ( 76) 00:09:44.301 10636.997 - 10687.409: 44.6541% ( 76) 00:09:44.301 10687.409 - 10737.822: 45.2686% ( 70) 00:09:44.301 10737.822 - 10788.234: 45.9182% ( 74) 00:09:44.301 10788.234 - 10838.646: 46.5502% ( 72) 00:09:44.301 10838.646 - 10889.058: 47.2086% ( 75) 00:09:44.301 10889.058 - 10939.471: 47.7879% ( 66) 00:09:44.301 10939.471 - 10989.883: 48.4112% ( 71) 00:09:44.301 10989.883 - 11040.295: 49.0344% ( 71) 00:09:44.301 11040.295 - 11090.708: 49.5962% ( 64) 00:09:44.301 11090.708 - 11141.120: 50.1843% ( 67) 00:09:44.301 11141.120 - 11191.532: 50.7900% ( 69) 00:09:44.301 11191.532 - 11241.945: 51.3343% ( 62) 00:09:44.301 11241.945 - 11292.357: 51.8610% ( 60) 00:09:44.301 11292.357 - 11342.769: 52.4491% ( 67) 00:09:44.301 11342.769 - 11393.182: 52.9758% ( 60) 00:09:44.301 11393.182 - 11443.594: 53.5639% ( 67) 00:09:44.301 11443.594 - 11494.006: 54.1345% ( 65) 00:09:44.301 11494.006 - 11544.418: 54.7577% ( 71) 00:09:44.301 11544.418 - 11594.831: 55.3283% ( 65) 00:09:44.301 11594.831 - 11645.243: 55.9077% ( 66) 00:09:44.301 11645.243 - 11695.655: 56.4607% ( 63) 00:09:44.301 11695.655 - 11746.068: 56.9961% ( 61) 00:09:44.301 11746.068 - 11796.480: 57.4438% ( 51) 00:09:44.301 11796.480 - 11846.892: 57.8915% ( 51) 00:09:44.301 11846.892 - 11897.305: 58.3480% ( 52) 00:09:44.301 11897.305 - 11947.717: 58.7781% ( 49) 
00:09:44.301 11947.717 - 11998.129: 59.1380% ( 41) 00:09:44.301 11998.129 - 12048.542: 59.5506% ( 47) 00:09:44.301 12048.542 - 12098.954: 59.9105% ( 41) 00:09:44.301 12098.954 - 12149.366: 60.2791% ( 42) 00:09:44.301 12149.366 - 12199.778: 60.6654% ( 44) 00:09:44.301 12199.778 - 12250.191: 61.0428% ( 43) 00:09:44.301 12250.191 - 12300.603: 61.3676% ( 37) 00:09:44.301 12300.603 - 12351.015: 61.7714% ( 46) 00:09:44.301 12351.015 - 12401.428: 62.2191% ( 51) 00:09:44.301 12401.428 - 12451.840: 62.5439% ( 37) 00:09:44.301 12451.840 - 12502.252: 62.9213% ( 43) 00:09:44.301 12502.252 - 12552.665: 63.2374% ( 36) 00:09:44.301 12552.665 - 12603.077: 63.5446% ( 35) 00:09:44.301 12603.077 - 12653.489: 63.8694% ( 37) 00:09:44.301 12653.489 - 12703.902: 64.2029% ( 38) 00:09:44.301 12703.902 - 12754.314: 64.5629% ( 41) 00:09:44.301 12754.314 - 12804.726: 65.0018% ( 50) 00:09:44.301 12804.726 - 12855.138: 65.4758% ( 54) 00:09:44.301 12855.138 - 12905.551: 65.9586% ( 55) 00:09:44.301 12905.551 - 13006.375: 67.0909% ( 129) 00:09:44.301 13006.375 - 13107.200: 68.3287% ( 141) 00:09:44.301 13107.200 - 13208.025: 69.5137% ( 135) 00:09:44.301 13208.025 - 13308.849: 70.6812% ( 133) 00:09:44.301 13308.849 - 13409.674: 71.9101% ( 140) 00:09:44.301 13409.674 - 13510.498: 73.0952% ( 135) 00:09:44.301 13510.498 - 13611.323: 74.2100% ( 127) 00:09:44.301 13611.323 - 13712.148: 75.3687% ( 132) 00:09:44.301 13712.148 - 13812.972: 76.6239% ( 143) 00:09:44.301 13812.972 - 13913.797: 77.7037% ( 123) 00:09:44.301 13913.797 - 14014.622: 78.8536% ( 131) 00:09:44.301 14014.622 - 14115.446: 79.9772% ( 128) 00:09:44.301 14115.446 - 14216.271: 81.1710% ( 136) 00:09:44.301 14216.271 - 14317.095: 82.3473% ( 134) 00:09:44.301 14317.095 - 14417.920: 83.5499% ( 137) 00:09:44.301 14417.920 - 14518.745: 84.5593% ( 115) 00:09:44.301 14518.745 - 14619.569: 85.5337% ( 111) 00:09:44.301 14619.569 - 14720.394: 86.4905% ( 109) 00:09:44.301 14720.394 - 14821.218: 87.4034% ( 104) 00:09:44.301 14821.218 - 14922.043: 88.2988% ( 102) 00:09:44.301 14922.043 - 15022.868: 89.2381% ( 107) 00:09:44.301 15022.868 - 15123.692: 90.0544% ( 93) 00:09:44.301 15123.692 - 15224.517: 90.7479% ( 79) 00:09:44.301 15224.517 - 15325.342: 91.3448% ( 68) 00:09:44.301 15325.342 - 15426.166: 91.8452% ( 57) 00:09:44.301 15426.166 - 15526.991: 92.2226% ( 43) 00:09:44.301 15526.991 - 15627.815: 92.6440% ( 48) 00:09:44.301 15627.815 - 15728.640: 93.0214% ( 43) 00:09:44.301 15728.640 - 15829.465: 93.3550% ( 38) 00:09:44.301 15829.465 - 15930.289: 93.7237% ( 42) 00:09:44.301 15930.289 - 16031.114: 94.0485% ( 37) 00:09:44.301 16031.114 - 16131.938: 94.3996% ( 40) 00:09:44.301 16131.938 - 16232.763: 94.7156% ( 36) 00:09:44.301 16232.763 - 16333.588: 94.9965% ( 32) 00:09:44.301 16333.588 - 16434.412: 95.2511% ( 29) 00:09:44.301 16434.412 - 16535.237: 95.5144% ( 30) 00:09:44.301 16535.237 - 16636.062: 95.7602% ( 28) 00:09:44.301 16636.062 - 16736.886: 96.0235% ( 30) 00:09:44.301 16736.886 - 16837.711: 96.2430% ( 25) 00:09:44.301 16837.711 - 16938.535: 96.4712% ( 26) 00:09:44.301 16938.535 - 17039.360: 96.6907% ( 25) 00:09:44.301 17039.360 - 17140.185: 96.9013% ( 24) 00:09:44.301 17140.185 - 17241.009: 97.1208% ( 25) 00:09:44.301 17241.009 - 17341.834: 97.3315% ( 24) 00:09:44.301 17341.834 - 17442.658: 97.4982% ( 19) 00:09:44.301 17442.658 - 17543.483: 97.6738% ( 20) 00:09:44.301 17543.483 - 17644.308: 97.8494% ( 20) 00:09:44.301 17644.308 - 17745.132: 98.0249% ( 20) 00:09:44.301 17745.132 - 17845.957: 98.2005% ( 20) 00:09:44.301 17845.957 - 17946.782: 98.3673% ( 19) 00:09:44.301 
17946.782 - 18047.606: 98.5341% ( 19) 00:09:44.301 18047.606 - 18148.431: 98.6570% ( 14) 00:09:44.301 18148.431 - 18249.255: 98.7623% ( 12) 00:09:44.301 18249.255 - 18350.080: 98.8501% ( 10) 00:09:44.301 18350.080 - 18450.905: 98.8764% ( 3) 00:09:44.301 31255.631 - 31457.280: 98.9642% ( 10) 00:09:44.301 31457.280 - 31658.929: 99.0607% ( 11) 00:09:44.301 31658.929 - 31860.578: 99.1134% ( 6) 00:09:44.301 31860.578 - 32062.228: 99.1749% ( 7) 00:09:44.301 32062.228 - 32263.877: 99.2100% ( 4) 00:09:44.301 32263.877 - 32465.526: 99.2539% ( 5) 00:09:44.301 32465.526 - 32667.175: 99.3065% ( 6) 00:09:44.301 32667.175 - 32868.825: 99.3592% ( 6) 00:09:44.301 32868.825 - 33070.474: 99.4119% ( 6) 00:09:44.301 33070.474 - 33272.123: 99.4645% ( 6) 00:09:44.301 33272.123 - 33473.772: 99.5172% ( 6) 00:09:44.301 33473.772 - 33675.422: 99.5699% ( 6) 00:09:44.301 33675.422 - 33877.071: 99.6225% ( 6) 00:09:44.301 33877.071 - 34078.720: 99.6752% ( 6) 00:09:44.301 34078.720 - 34280.369: 99.7279% ( 6) 00:09:44.301 34280.369 - 34482.018: 99.7805% ( 6) 00:09:44.301 34482.018 - 34683.668: 99.8420% ( 7) 00:09:44.301 34683.668 - 34885.317: 99.8947% ( 6) 00:09:44.301 34885.317 - 35086.966: 99.9473% ( 6) 00:09:44.301 35086.966 - 35288.615: 100.0000% ( 6) 00:09:44.301 00:09:44.301 14:04:47 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:45.677 Initializing NVMe Controllers 00:09:45.677 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:45.677 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:45.677 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:45.677 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:45.677 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:45.677 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:45.677 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:45.677 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:45.677 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:45.677 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:45.677 Initialization complete. Launching workers. 
00:09:45.677 ======================================================== 00:09:45.677 Latency(us) 00:09:45.677 Device Information : IOPS MiB/s Average min max 00:09:45.677 PCIE (0000:00:06.0) NSID 1 from core 0: 16710.68 195.83 7655.96 4878.62 30779.51 00:09:45.677 PCIE (0000:00:07.0) NSID 1 from core 0: 16710.68 195.83 7649.65 5397.96 29256.37 00:09:45.677 PCIE (0000:00:09.0) NSID 1 from core 0: 16710.68 195.83 7642.99 5348.42 28036.68 00:09:45.677 PCIE (0000:00:08.0) NSID 1 from core 0: 16710.68 195.83 7635.79 5373.71 26763.23 00:09:45.677 PCIE (0000:00:08.0) NSID 2 from core 0: 16710.68 195.83 7628.75 5299.00 25419.54 00:09:45.677 PCIE (0000:00:08.0) NSID 3 from core 0: 16838.24 197.32 7563.94 5196.54 17123.61 00:09:45.677 ======================================================== 00:09:45.677 Total : 100391.66 1176.46 7629.43 4878.62 30779.51 00:09:45.677 00:09:45.677 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:45.677 ================================================================================= 00:09:45.677 1.00000% : 5394.117us 00:09:45.677 10.00000% : 6024.271us 00:09:45.677 25.00000% : 6326.745us 00:09:45.677 50.00000% : 6856.074us 00:09:45.677 75.00000% : 7662.671us 00:09:45.677 90.00000% : 11292.357us 00:09:45.677 95.00000% : 12199.778us 00:09:45.677 98.00000% : 13208.025us 00:09:45.677 99.00000% : 13913.797us 00:09:45.677 99.50000% : 28634.191us 00:09:45.677 99.90000% : 30449.034us 00:09:45.677 99.99000% : 30852.332us 00:09:45.677 99.99900% : 30852.332us 00:09:45.677 99.99990% : 30852.332us 00:09:45.677 99.99999% : 30852.332us 00:09:45.677 00:09:45.677 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:45.677 ================================================================================= 00:09:45.677 1.00000% : 5923.446us 00:09:45.677 10.00000% : 6175.508us 00:09:45.677 25.00000% : 6402.363us 00:09:45.677 50.00000% : 6805.662us 00:09:45.677 75.00000% : 7511.434us 00:09:45.677 90.00000% : 11241.945us 00:09:45.677 95.00000% : 11947.717us 00:09:45.677 98.00000% : 12552.665us 00:09:45.677 99.00000% : 13006.375us 00:09:45.677 99.50000% : 27222.646us 00:09:45.677 99.90000% : 29037.489us 00:09:45.677 99.99000% : 29239.138us 00:09:45.677 99.99900% : 29440.788us 00:09:45.677 99.99990% : 29440.788us 00:09:45.677 99.99999% : 29440.788us 00:09:45.677 00:09:45.677 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:45.677 ================================================================================= 00:09:45.677 1.00000% : 5797.415us 00:09:45.677 10.00000% : 6150.302us 00:09:45.677 25.00000% : 6427.569us 00:09:45.677 50.00000% : 6805.662us 00:09:45.678 75.00000% : 7461.022us 00:09:45.678 90.00000% : 11342.769us 00:09:45.678 95.00000% : 12149.366us 00:09:45.678 98.00000% : 13006.375us 00:09:45.678 99.00000% : 13510.498us 00:09:45.678 99.50000% : 25710.277us 00:09:45.678 99.90000% : 27625.945us 00:09:45.678 99.99000% : 28029.243us 00:09:45.678 99.99900% : 28230.892us 00:09:45.678 99.99990% : 28230.892us 00:09:45.678 99.99999% : 28230.892us 00:09:45.678 00:09:45.678 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:45.678 ================================================================================= 00:09:45.678 1.00000% : 5847.828us 00:09:45.678 10.00000% : 6175.508us 00:09:45.678 25.00000% : 6427.569us 00:09:45.678 50.00000% : 6805.662us 00:09:45.678 75.00000% : 7461.022us 00:09:45.678 90.00000% : 11191.532us 00:09:45.678 95.00000% : 12199.778us 00:09:45.678 98.00000% : 13812.972us 00:09:45.678 
99.00000% : 15123.692us 00:09:45.678 99.50000% : 24399.557us 00:09:45.678 99.90000% : 26416.049us 00:09:45.678 99.99000% : 26819.348us 00:09:45.678 99.99900% : 26819.348us 00:09:45.678 99.99990% : 26819.348us 00:09:45.678 99.99999% : 26819.348us 00:09:45.678 00:09:45.678 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:45.678 ================================================================================= 00:09:45.678 1.00000% : 5822.622us 00:09:45.678 10.00000% : 6175.508us 00:09:45.678 25.00000% : 6402.363us 00:09:45.678 50.00000% : 6805.662us 00:09:45.678 75.00000% : 7410.609us 00:09:45.678 90.00000% : 11292.357us 00:09:45.678 95.00000% : 12149.366us 00:09:45.678 98.00000% : 14821.218us 00:09:45.678 99.00000% : 15426.166us 00:09:45.678 99.50000% : 23088.837us 00:09:45.678 99.90000% : 25004.505us 00:09:45.678 99.99000% : 25407.803us 00:09:45.678 99.99900% : 25508.628us 00:09:45.678 99.99990% : 25508.628us 00:09:45.678 99.99999% : 25508.628us 00:09:45.678 00:09:45.678 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:45.678 ================================================================================= 00:09:45.678 1.00000% : 5822.622us 00:09:45.678 10.00000% : 6175.508us 00:09:45.678 25.00000% : 6427.569us 00:09:45.678 50.00000% : 6805.662us 00:09:45.678 75.00000% : 7511.434us 00:09:45.678 90.00000% : 11241.945us 00:09:45.678 95.00000% : 12300.603us 00:09:45.678 98.00000% : 13913.797us 00:09:45.678 99.00000% : 14720.394us 00:09:45.678 99.50000% : 15526.991us 00:09:45.678 99.90000% : 16736.886us 00:09:45.678 99.99000% : 17140.185us 00:09:45.678 99.99900% : 17140.185us 00:09:45.678 99.99990% : 17140.185us 00:09:45.678 99.99999% : 17140.185us 00:09:45.678 00:09:45.678 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:45.678 ============================================================================== 00:09:45.678 Range in us Cumulative IO count 00:09:45.678 4864.788 - 4889.994: 0.0060% ( 1) 00:09:45.678 4889.994 - 4915.200: 0.0119% ( 1) 00:09:45.678 4915.200 - 4940.406: 0.0239% ( 2) 00:09:45.678 4940.406 - 4965.612: 0.0298% ( 1) 00:09:45.678 4965.612 - 4990.818: 0.0417% ( 2) 00:09:45.678 4990.818 - 5016.025: 0.0477% ( 1) 00:09:45.678 5016.025 - 5041.231: 0.0537% ( 1) 00:09:45.678 5041.231 - 5066.437: 0.0716% ( 3) 00:09:45.678 5066.437 - 5091.643: 0.0895% ( 3) 00:09:45.678 5091.643 - 5116.849: 0.1193% ( 5) 00:09:45.678 5116.849 - 5142.055: 0.1491% ( 5) 00:09:45.678 5142.055 - 5167.262: 0.2028% ( 9) 00:09:45.678 5167.262 - 5192.468: 0.2385% ( 6) 00:09:45.678 5192.468 - 5217.674: 0.3101% ( 12) 00:09:45.678 5217.674 - 5242.880: 0.3757% ( 11) 00:09:45.678 5242.880 - 5268.086: 0.4950% ( 20) 00:09:45.678 5268.086 - 5293.292: 0.6083% ( 19) 00:09:45.678 5293.292 - 5318.498: 0.7276% ( 20) 00:09:45.678 5318.498 - 5343.705: 0.8230% ( 16) 00:09:45.678 5343.705 - 5368.911: 0.9542% ( 22) 00:09:45.678 5368.911 - 5394.117: 1.0675% ( 19) 00:09:45.678 5394.117 - 5419.323: 1.2047% ( 23) 00:09:45.678 5419.323 - 5444.529: 1.3657% ( 27) 00:09:45.678 5444.529 - 5469.735: 1.5804% ( 36) 00:09:45.678 5469.735 - 5494.942: 1.7951% ( 36) 00:09:45.678 5494.942 - 5520.148: 1.9919% ( 33) 00:09:45.678 5520.148 - 5545.354: 2.2006% ( 35) 00:09:45.678 5545.354 - 5570.560: 2.3736% ( 29) 00:09:45.678 5570.560 - 5595.766: 2.5465% ( 29) 00:09:45.678 5595.766 - 5620.972: 2.7195% ( 29) 00:09:45.678 5620.972 - 5646.178: 2.8984% ( 30) 00:09:45.678 5646.178 - 5671.385: 3.0713% ( 29) 00:09:45.678 5671.385 - 5696.591: 3.3635% ( 49) 00:09:45.678 5696.591 - 5721.797: 3.6856% 
( 54) 00:09:45.678 5721.797 - 5747.003: 4.1865% ( 84) 00:09:45.678 5747.003 - 5772.209: 5.0036% ( 137) 00:09:45.678 5772.209 - 5797.415: 5.6775% ( 113) 00:09:45.678 5797.415 - 5822.622: 6.3812% ( 118) 00:09:45.678 5822.622 - 5847.828: 6.9835% ( 101) 00:09:45.678 5847.828 - 5873.034: 7.3056% ( 54) 00:09:45.678 5873.034 - 5898.240: 7.6574% ( 59) 00:09:45.678 5898.240 - 5923.446: 7.9616% ( 51) 00:09:45.678 5923.446 - 5948.652: 8.3373% ( 63) 00:09:45.678 5948.652 - 5973.858: 8.8919% ( 93) 00:09:45.678 5973.858 - 5999.065: 9.3452% ( 76) 00:09:45.678 5999.065 - 6024.271: 10.0370% ( 116) 00:09:45.678 6024.271 - 6049.477: 10.6691% ( 106) 00:09:45.678 6049.477 - 6074.683: 11.6591% ( 166) 00:09:45.678 6074.683 - 6099.889: 12.7147% ( 177) 00:09:45.678 6099.889 - 6125.095: 13.8478% ( 190) 00:09:45.678 6125.095 - 6150.302: 14.9332% ( 182) 00:09:45.678 6150.302 - 6175.508: 16.2870% ( 227) 00:09:45.678 6175.508 - 6200.714: 17.5692% ( 215) 00:09:45.678 6200.714 - 6225.920: 18.7440% ( 197) 00:09:45.678 6225.920 - 6251.126: 20.3781% ( 274) 00:09:45.678 6251.126 - 6276.332: 22.0718% ( 284) 00:09:45.678 6276.332 - 6301.538: 23.4852% ( 237) 00:09:45.678 6301.538 - 6326.745: 25.1252% ( 275) 00:09:45.678 6326.745 - 6351.951: 26.5685% ( 242) 00:09:45.678 6351.951 - 6377.157: 27.8686% ( 218) 00:09:45.678 6377.157 - 6402.363: 29.2581% ( 233) 00:09:45.678 6402.363 - 6427.569: 30.4926% ( 207) 00:09:45.678 6427.569 - 6452.775: 31.7390% ( 209) 00:09:45.678 6452.775 - 6503.188: 34.5360% ( 469) 00:09:45.678 6503.188 - 6553.600: 37.2018% ( 447) 00:09:45.678 6553.600 - 6604.012: 39.7841% ( 433) 00:09:45.678 6604.012 - 6654.425: 42.3247% ( 426) 00:09:45.678 6654.425 - 6704.837: 44.7579% ( 408) 00:09:45.678 6704.837 - 6755.249: 47.3223% ( 430) 00:09:45.678 6755.249 - 6805.662: 49.7913% ( 414) 00:09:45.678 6805.662 - 6856.074: 52.2066% ( 405) 00:09:45.678 6856.074 - 6906.486: 53.9241% ( 288) 00:09:45.678 6906.486 - 6956.898: 55.8802% ( 328) 00:09:45.678 6956.898 - 7007.311: 57.8423% ( 329) 00:09:45.678 7007.311 - 7057.723: 59.6851% ( 309) 00:09:45.678 7057.723 - 7108.135: 61.6830% ( 335) 00:09:45.678 7108.135 - 7158.548: 63.0069% ( 222) 00:09:45.678 7158.548 - 7208.960: 64.6768% ( 280) 00:09:45.678 7208.960 - 7259.372: 66.4420% ( 296) 00:09:45.678 7259.372 - 7309.785: 68.2013% ( 295) 00:09:45.678 7309.785 - 7360.197: 69.5074% ( 219) 00:09:45.678 7360.197 - 7410.609: 70.6345% ( 189) 00:09:45.678 7410.609 - 7461.022: 71.7677% ( 190) 00:09:45.678 7461.022 - 7511.434: 72.8292% ( 178) 00:09:45.678 7511.434 - 7561.846: 73.9623% ( 190) 00:09:45.678 7561.846 - 7612.258: 74.8569% ( 150) 00:09:45.678 7612.258 - 7662.671: 75.5785% ( 121) 00:09:45.678 7662.671 - 7713.083: 76.2464% ( 112) 00:09:45.678 7713.083 - 7763.495: 76.9621% ( 120) 00:09:45.678 7763.495 - 7813.908: 77.5286% ( 95) 00:09:45.678 7813.908 - 7864.320: 78.0713% ( 91) 00:09:45.678 7864.320 - 7914.732: 78.6379% ( 95) 00:09:45.678 7914.732 - 7965.145: 79.1567% ( 87) 00:09:45.678 7965.145 - 8015.557: 79.5265% ( 62) 00:09:45.678 8015.557 - 8065.969: 79.8783% ( 59) 00:09:45.678 8065.969 - 8116.382: 80.1407% ( 44) 00:09:45.678 8116.382 - 8166.794: 80.3435% ( 34) 00:09:45.678 8166.794 - 8217.206: 80.5463% ( 34) 00:09:45.678 8217.206 - 8267.618: 80.6954% ( 25) 00:09:45.678 8267.618 - 8318.031: 80.8624% ( 28) 00:09:45.678 8318.031 - 8368.443: 80.9816% ( 20) 00:09:45.678 8368.443 - 8418.855: 81.1486% ( 28) 00:09:45.678 8418.855 - 8469.268: 81.4170% ( 45) 00:09:45.678 8469.268 - 8519.680: 81.6496% ( 39) 00:09:45.678 8519.680 - 8570.092: 81.7510% ( 17) 00:09:45.678 8570.092 - 
00:09:45.679 [latency histogram buckets elided: tail of the preceding controller's histogram, cumulative IO count rising from ~81.9% at ~8.6 ms to 100.0000% at ~30.9 ms]
00:09:45.679 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:09:45.679 ==============================================================================
00:09:45.679 Range in us Cumulative IO count
00:09:45.680 [histogram buckets elided: ~5.4 us through ~29.4 ms, cumulative IO count reaching 100.0000%]
00:09:45.680 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:09:45.680 ==============================================================================
00:09:45.680 Range in us Cumulative IO count
00:09:45.681 [histogram buckets elided: ~5.3 us through ~28.2 ms, cumulative IO count reaching 100.0000%]
00:09:45.681 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:09:45.681 ==============================================================================
00:09:45.681 Range in us Cumulative IO count
00:09:45.682 [histogram buckets elided: ~5.4 us through ~26.8 ms, cumulative IO count reaching 100.0000%]
00:09:45.682 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:09:45.683 ==============================================================================
00:09:45.683 Range in us Cumulative IO count
00:09:45.684 [histogram buckets elided: ~5.3 us through ~25.5 ms, cumulative IO count reaching 100.0000%]
00:09:45.684 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:09:45.684 ==============================================================================
00:09:45.684 Range in us Cumulative IO count
00:09:45.685 [histogram buckets elided: ~5.2 us through ~17.1 ms, cumulative IO count reaching 100.0000%]
00:09:45.685 14:04:48 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:45.685
00:09:45.685 real 0m2.652s
00:09:45.685 user 0m2.303s
00:09:45.685 sys 0m0.239s
00:09:45.685 14:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:45.685 14:04:48 -- common/autotest_common.sh@10 -- # set +x
00:09:45.685 ************************************
00:09:45.685 END TEST nvme_perf
00:09:45.685 ************************************
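The per-device histograms above all share one shape: each bucket line gives a latency range in microseconds, the cumulative percentage of IOs completed at or below that latency, and the bucket's IO count. A small post-processing sketch, not part of the SPDK test suite, that pulls an approximate percentile out of a saved copy of this log; it assumes the Jenkins timestamp prefixes have already been stripped, and the log path and field positions are inferred from the output shown here:

  #!/usr/bin/env bash
  # Hypothetical helper: print the first histogram bucket whose cumulative
  # IO percentage reaches a target (e.g. 99 for an approximate p99).
  # Assumes bucket lines shaped like: "12048.542 - 12098.954: 96.1832% ( 46)"
  log="$1"
  target="${2:-99}"
  awk -v t="$target" '
      $2 == "-" && $4 ~ /%$/ {
          pct = $4
          sub(/%$/, "", pct)              # strip the trailing percent sign
          if (pct + 0 >= t) {
              hi = $3
              sub(/:$/, "", hi)           # strip the colon after the range end
              printf "first bucket reaching %s%%: %s - %s us\n", t, $1, hi
              exit
          }
      }' "$log"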
00:09:45.685 14:04:48 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:45.685 14:04:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:45.685 14:04:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:45.685 14:04:48 -- common/autotest_common.sh@10 -- # set +x
00:09:45.685 ************************************
00:09:45.685 START TEST nvme_hello_world
00:09:45.685 ************************************
00:09:45.685 14:04:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:45.944 Initializing NVMe Controllers
00:09:45.944 Attached to 0000:00:06.0
00:09:45.944 Namespace ID: 1 size: 6GB
00:09:45.944 Attached to 0000:00:07.0
00:09:45.944 Namespace ID: 1 size: 5GB
00:09:45.944 Attached to 0000:00:09.0
00:09:45.944 Namespace ID: 1 size: 1GB
00:09:45.944 Attached to 0000:00:08.0
00:09:45.944 Namespace ID: 1 size: 4GB
00:09:45.944 Namespace ID: 2 size: 4GB
00:09:45.944 Namespace ID: 3 size: 4GB
00:09:45.944 Initialization complete.
00:09:45.944 INFO: using host memory buffer for IO
00:09:45.944 Hello world!
00:09:45.944 INFO: using host memory buffer for IO
00:09:45.944 Hello world!
00:09:45.944 INFO: using host memory buffer for IO
00:09:45.944 Hello world!
00:09:45.944 INFO: using host memory buffer for IO
00:09:45.944 Hello world!
00:09:45.944 INFO: using host memory buffer for IO
00:09:45.944 Hello world!
00:09:45.944 INFO: using host memory buffer for IO
00:09:45.944 Hello world!
00:09:45.944
00:09:45.944 real 0m0.267s
00:09:45.944 user 0m0.124s
00:09:45.944 sys 0m0.100s
00:09:45.944 14:04:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:45.944 14:04:48 -- common/autotest_common.sh@10 -- # set +x
00:09:45.944 ************************************
00:09:45.944 END TEST nvme_hello_world
00:09:45.944 ************************************
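The hello_world binary exercised here is the standard SPDK example program; one "Hello world!" round trip is printed per attached namespace. A minimal sketch for re-running it by hand, assuming the tree built at the path used by this job, root privileges, and that scripts/setup.sh (the usual SPDK device-binding script) has not yet been run:

  # Sketch: re-run the same example outside the test harness.
  cd /home/vagrant/spdk_repo/spdk
  sudo scripts/setup.sh                 # rebind NVMe controllers to vfio-pci/uio
  sudo build/examples/hello_world -i 0  # -i 0: shared memory id, as in the log above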
0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:46.464 NVMe Readv/Writev Request test 00:09:46.464 Attached to 0000:00:06.0 00:09:46.464 Attached to 0000:00:07.0 00:09:46.464 Attached to 0000:00:09.0 00:09:46.464 Attached to 0000:00:08.0 00:09:46.464 0000:00:06.0: build_io_request_2 test passed 00:09:46.464 0000:00:06.0: build_io_request_4 test passed 00:09:46.464 0000:00:06.0: build_io_request_5 test passed 00:09:46.464 0000:00:06.0: build_io_request_6 test passed 00:09:46.464 0000:00:06.0: build_io_request_7 test passed 00:09:46.464 0000:00:06.0: build_io_request_10 test passed 00:09:46.464 0000:00:07.0: build_io_request_2 test passed 00:09:46.464 0000:00:07.0: build_io_request_4 test passed 00:09:46.464 0000:00:07.0: build_io_request_5 test passed 00:09:46.464 0000:00:07.0: build_io_request_6 test passed 00:09:46.464 0000:00:07.0: build_io_request_7 test passed 00:09:46.464 0000:00:07.0: build_io_request_10 test passed 00:09:46.464 Cleaning up... 00:09:46.464 00:09:46.464 real 0m0.388s 00:09:46.464 user 0m0.244s 00:09:46.464 sys 0m0.095s 00:09:46.464 14:04:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.464 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.464 ************************************ 00:09:46.464 END TEST nvme_sgl 00:09:46.464 ************************************ 00:09:46.464 14:04:49 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:46.464 14:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.464 14:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.464 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.464 ************************************ 00:09:46.464 START TEST nvme_e2edp 00:09:46.464 ************************************ 00:09:46.464 14:04:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:46.724 NVMe Write/Read with End-to-End data protection test 00:09:46.724 Attached to 0000:00:06.0 00:09:46.724 Attached to 0000:00:07.0 00:09:46.724 Attached to 0000:00:09.0 00:09:46.724 Attached to 0000:00:08.0 00:09:46.724 Cleaning up... 
00:09:46.724 00:09:46.724 real 0m0.188s 00:09:46.724 user 0m0.060s 00:09:46.724 sys 0m0.089s 00:09:46.724 14:04:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.724 ************************************ 00:09:46.724 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.724 END TEST nvme_e2edp 00:09:46.724 ************************************ 00:09:46.725 14:04:49 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:46.725 14:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.725 14:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.725 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.725 ************************************ 00:09:46.725 START TEST nvme_reserve 00:09:46.725 ************************************ 00:09:46.725 14:04:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:46.985 ===================================================== 00:09:46.985 NVMe Controller at PCI bus 0, device 6, function 0 00:09:46.985 ===================================================== 00:09:46.985 Reservations: Not Supported 00:09:46.985 ===================================================== 00:09:46.985 NVMe Controller at PCI bus 0, device 7, function 0 00:09:46.985 ===================================================== 00:09:46.985 Reservations: Not Supported 00:09:46.985 ===================================================== 00:09:46.985 NVMe Controller at PCI bus 0, device 9, function 0 00:09:46.985 ===================================================== 00:09:46.985 Reservations: Not Supported 00:09:46.985 ===================================================== 00:09:46.985 NVMe Controller at PCI bus 0, device 8, function 0 00:09:46.985 ===================================================== 00:09:46.985 Reservations: Not Supported 00:09:46.985 Reservation test passed 00:09:46.985 ************************************ 00:09:46.985 END TEST nvme_reserve 00:09:46.985 ************************************ 00:09:46.986 00:09:46.986 real 0m0.195s 00:09:46.986 user 0m0.063s 00:09:46.986 sys 0m0.090s 00:09:46.986 14:04:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.986 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.986 14:04:49 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:46.986 14:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.986 14:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.986 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:09:46.986 ************************************ 00:09:46.986 START TEST nvme_err_injection 00:09:46.986 ************************************ 00:09:46.986 14:04:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:47.246 NVMe Error Injection test 00:09:47.246 Attached to 0000:00:06.0 00:09:47.246 Attached to 0000:00:07.0 00:09:47.246 Attached to 0000:00:09.0 00:09:47.246 Attached to 0000:00:08.0 00:09:47.246 0000:00:09.0: get features failed as expected 00:09:47.246 0000:00:08.0: get features failed as expected 00:09:47.246 0000:00:06.0: get features failed as expected 00:09:47.246 0000:00:07.0: get features failed as expected 00:09:47.246 0000:00:06.0: get features successfully as expected 00:09:47.246 0000:00:07.0: get features successfully as expected 00:09:47.246 0000:00:09.0: get features 
successfully as expected 00:09:47.246 0000:00:08.0: get features successfully as expected 00:09:47.246 0000:00:08.0: read failed as expected 00:09:47.246 0000:00:06.0: read failed as expected 00:09:47.246 0000:00:07.0: read failed as expected 00:09:47.246 0000:00:09.0: read failed as expected 00:09:47.246 0000:00:08.0: read successfully as expected 00:09:47.246 0000:00:06.0: read successfully as expected 00:09:47.246 0000:00:07.0: read successfully as expected 00:09:47.246 0000:00:09.0: read successfully as expected 00:09:47.246 Cleaning up... 00:09:47.246 00:09:47.246 real 0m0.255s 00:09:47.247 user 0m0.107s 00:09:47.247 sys 0m0.101s 00:09:47.247 ************************************ 00:09:47.247 END TEST nvme_err_injection 00:09:47.247 ************************************ 00:09:47.247 14:04:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:47.247 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:47.247 14:04:50 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:47.247 14:04:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:47.247 14:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.247 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:09:47.247 ************************************ 00:09:47.247 START TEST nvme_overhead 00:09:47.247 ************************************ 00:09:47.247 14:04:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:48.638 Initializing NVMe Controllers 00:09:48.638 Attached to 0000:00:06.0 00:09:48.638 Attached to 0000:00:07.0 00:09:48.638 Attached to 0000:00:09.0 00:09:48.638 Attached to 0000:00:08.0 00:09:48.638 Initialization complete. Launching workers. 
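Before the histograms below, a note on the overhead invocation just launched: it ran as overhead -o 4096 -t 1 -H -i 0. The flag roles are not documented anywhere in this log, so the annotations here are assumptions based on the output that follows.

  # flag roles are assumptions, not stated in this log:
  #   -o 4096   per-IO transfer size in bytes
  #   -t 1      run time in seconds
  #   -H        print the submit/complete latency histograms shown below
  #   -i 0      shared-memory instance id reused by every tool in this job
  /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0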
00:09:48.638 submit (in ns) avg, min, max = 16593.7, 12042.3, 242577.7 00:09:48.638 complete (in ns) avg, min, max = 10085.6, 7812.3, 107415.4 00:09:48.638 00:09:48.638 Submit histogram 00:09:48.638 ================ 00:09:48.638 Range in us Cumulative Count 00:09:48.639 12.012 - 12.062: 0.0215% ( 1) 00:09:48.639 12.062 - 12.111: 0.0429% ( 1) 00:09:48.639 12.111 - 12.160: 0.1073% ( 3) 00:09:48.639 12.160 - 12.209: 0.1932% ( 4) 00:09:48.639 12.209 - 12.258: 0.2791% ( 4) 00:09:48.639 12.258 - 12.308: 0.4079% ( 6) 00:09:48.639 12.308 - 12.357: 0.6655% ( 12) 00:09:48.639 12.357 - 12.406: 0.8158% ( 7) 00:09:48.639 12.406 - 12.455: 0.9446% ( 6) 00:09:48.639 12.455 - 12.505: 1.1808% ( 11) 00:09:48.639 12.505 - 12.554: 1.4599% ( 13) 00:09:48.639 12.554 - 12.603: 1.8033% ( 16) 00:09:48.639 12.603 - 12.702: 2.6836% ( 41) 00:09:48.639 12.702 - 12.800: 3.4564% ( 36) 00:09:48.639 12.800 - 12.898: 3.9717% ( 24) 00:09:48.639 12.898 - 12.997: 4.5084% ( 25) 00:09:48.639 12.997 - 13.095: 5.4530% ( 44) 00:09:48.639 13.095 - 13.194: 6.1614% ( 33) 00:09:48.639 13.194 - 13.292: 7.6857% ( 71) 00:09:48.639 13.292 - 13.391: 8.8665% ( 55) 00:09:48.639 13.391 - 13.489: 9.4461% ( 27) 00:09:48.639 13.489 - 13.588: 10.5839% ( 53) 00:09:48.639 13.588 - 13.686: 11.9150% ( 62) 00:09:48.639 13.686 - 13.785: 12.7952% ( 41) 00:09:48.639 13.785 - 13.883: 13.4607% ( 31) 00:09:48.639 13.883 - 13.982: 14.1477% ( 32) 00:09:48.639 13.982 - 14.080: 14.8776% ( 34) 00:09:48.639 14.080 - 14.178: 15.6934% ( 38) 00:09:48.639 14.178 - 14.277: 16.2301% ( 25) 00:09:48.639 14.277 - 14.375: 16.7454% ( 24) 00:09:48.639 14.375 - 14.474: 17.1103% ( 17) 00:09:48.639 14.474 - 14.572: 17.6900% ( 27) 00:09:48.639 14.572 - 14.671: 18.2052% ( 24) 00:09:48.639 14.671 - 14.769: 18.7849% ( 27) 00:09:48.639 14.769 - 14.868: 19.2572% ( 22) 00:09:48.639 14.868 - 14.966: 19.8154% ( 26) 00:09:48.639 14.966 - 15.065: 20.3091% ( 23) 00:09:48.639 15.065 - 15.163: 20.5668% ( 12) 00:09:48.639 15.163 - 15.262: 20.9747% ( 19) 00:09:48.639 15.262 - 15.360: 21.4040% ( 20) 00:09:48.639 15.360 - 15.458: 21.7905% ( 18) 00:09:48.639 15.458 - 15.557: 22.3486% ( 26) 00:09:48.639 15.557 - 15.655: 22.9498% ( 28) 00:09:48.639 15.655 - 15.754: 23.5938% ( 30) 00:09:48.639 15.754 - 15.852: 24.0447% ( 21) 00:09:48.639 15.852 - 15.951: 24.5814% ( 25) 00:09:48.639 15.951 - 16.049: 25.3757% ( 37) 00:09:48.639 16.049 - 16.148: 26.7497% ( 64) 00:09:48.639 16.148 - 16.246: 30.3349% ( 167) 00:09:48.639 16.246 - 16.345: 35.9811% ( 263) 00:09:48.639 16.345 - 16.443: 41.8205% ( 272) 00:09:48.639 16.443 - 16.542: 48.4757% ( 310) 00:09:48.639 16.542 - 16.640: 56.1400% ( 357) 00:09:48.639 16.640 - 16.738: 64.1692% ( 374) 00:09:48.639 16.738 - 16.837: 70.4165% ( 291) 00:09:48.639 16.837 - 16.935: 76.1271% ( 266) 00:09:48.639 16.935 - 17.034: 81.0648% ( 230) 00:09:48.639 17.034 - 17.132: 85.0794% ( 187) 00:09:48.639 17.132 - 17.231: 87.5912% ( 117) 00:09:48.639 17.231 - 17.329: 89.4590% ( 87) 00:09:48.639 17.329 - 17.428: 90.7471% ( 60) 00:09:48.639 17.428 - 17.526: 91.5200% ( 36) 00:09:48.639 17.526 - 17.625: 92.0352% ( 24) 00:09:48.639 17.625 - 17.723: 92.4431% ( 19) 00:09:48.639 17.723 - 17.822: 92.7007% ( 12) 00:09:48.639 17.822 - 17.920: 92.7866% ( 4) 00:09:48.639 17.920 - 18.018: 92.9798% ( 9) 00:09:48.639 18.018 - 18.117: 93.0872% ( 5) 00:09:48.639 18.117 - 18.215: 93.1730% ( 4) 00:09:48.639 18.215 - 18.314: 93.2374% ( 3) 00:09:48.639 18.314 - 18.412: 93.3018% ( 3) 00:09:48.639 18.412 - 18.511: 93.3663% ( 3) 00:09:48.639 18.511 - 18.609: 93.4736% ( 5) 00:09:48.639 18.609 - 18.708: 
93.5595% ( 4) 00:09:48.639 18.708 - 18.806: 93.7742% ( 10) 00:09:48.639 18.806 - 18.905: 93.8600% ( 4) 00:09:48.639 18.905 - 19.003: 93.9030% ( 2) 00:09:48.639 19.003 - 19.102: 93.9888% ( 4) 00:09:48.639 19.102 - 19.200: 94.0318% ( 2) 00:09:48.639 19.200 - 19.298: 94.0532% ( 1) 00:09:48.639 19.298 - 19.397: 94.1606% ( 5) 00:09:48.639 19.397 - 19.495: 94.2465% ( 4) 00:09:48.639 19.495 - 19.594: 94.2679% ( 1) 00:09:48.639 19.594 - 19.692: 94.4397% ( 8) 00:09:48.639 19.692 - 19.791: 94.5255% ( 4) 00:09:48.639 19.791 - 19.889: 94.6114% ( 4) 00:09:48.639 19.889 - 19.988: 94.7188% ( 5) 00:09:48.639 19.988 - 20.086: 94.8261% ( 5) 00:09:48.639 20.086 - 20.185: 94.9334% ( 5) 00:09:48.639 20.185 - 20.283: 95.1481% ( 10) 00:09:48.639 20.283 - 20.382: 95.2125% ( 3) 00:09:48.639 20.382 - 20.480: 95.3413% ( 6) 00:09:48.639 20.480 - 20.578: 95.4058% ( 3) 00:09:48.639 20.578 - 20.677: 95.5131% ( 5) 00:09:48.639 20.677 - 20.775: 95.5560% ( 2) 00:09:48.639 20.775 - 20.874: 95.7063% ( 7) 00:09:48.639 20.874 - 20.972: 95.7922% ( 4) 00:09:48.639 20.972 - 21.071: 95.8351% ( 2) 00:09:48.639 21.071 - 21.169: 95.8995% ( 3) 00:09:48.639 21.169 - 21.268: 96.0069% ( 5) 00:09:48.639 21.268 - 21.366: 96.0713% ( 3) 00:09:48.639 21.366 - 21.465: 96.1571% ( 4) 00:09:48.639 21.465 - 21.563: 96.2645% ( 5) 00:09:48.639 21.563 - 21.662: 96.3718% ( 5) 00:09:48.639 21.662 - 21.760: 96.4148% ( 2) 00:09:48.639 21.760 - 21.858: 96.5006% ( 4) 00:09:48.639 21.858 - 21.957: 96.5650% ( 3) 00:09:48.639 21.957 - 22.055: 96.6939% ( 6) 00:09:48.639 22.055 - 22.154: 96.7583% ( 3) 00:09:48.639 22.154 - 22.252: 96.8441% ( 4) 00:09:48.639 22.252 - 22.351: 96.9300% ( 4) 00:09:48.639 22.351 - 22.449: 96.9515% ( 1) 00:09:48.639 22.646 - 22.745: 96.9729% ( 1) 00:09:48.639 22.745 - 22.843: 97.0159% ( 2) 00:09:48.639 22.843 - 22.942: 97.0374% ( 1) 00:09:48.639 22.942 - 23.040: 97.0803% ( 2) 00:09:48.639 23.040 - 23.138: 97.1018% ( 1) 00:09:48.639 23.138 - 23.237: 97.2306% ( 6) 00:09:48.639 23.434 - 23.532: 97.2735% ( 2) 00:09:48.639 23.532 - 23.631: 97.3379% ( 3) 00:09:48.639 23.631 - 23.729: 97.3809% ( 2) 00:09:48.639 23.729 - 23.828: 97.4023% ( 1) 00:09:48.639 23.828 - 23.926: 97.4238% ( 1) 00:09:48.639 23.926 - 24.025: 97.4882% ( 3) 00:09:48.639 24.025 - 24.123: 97.5311% ( 2) 00:09:48.639 24.123 - 24.222: 97.6170% ( 4) 00:09:48.639 24.222 - 24.320: 97.6814% ( 3) 00:09:48.639 24.320 - 24.418: 97.7029% ( 1) 00:09:48.639 24.418 - 24.517: 97.7243% ( 1) 00:09:48.639 24.517 - 24.615: 97.8317% ( 5) 00:09:48.639 24.615 - 24.714: 97.9605% ( 6) 00:09:48.639 24.714 - 24.812: 98.0678% ( 5) 00:09:48.639 24.812 - 24.911: 98.1752% ( 5) 00:09:48.639 24.911 - 25.009: 98.2611% ( 4) 00:09:48.639 25.009 - 25.108: 98.3040% ( 2) 00:09:48.639 25.108 - 25.206: 98.3899% ( 4) 00:09:48.639 25.206 - 25.403: 98.4972% ( 5) 00:09:48.639 25.403 - 25.600: 98.6690% ( 8) 00:09:48.639 25.600 - 25.797: 98.7978% ( 6) 00:09:48.639 25.797 - 25.994: 98.8192% ( 1) 00:09:48.639 25.994 - 26.191: 98.9910% ( 8) 00:09:48.639 26.191 - 26.388: 99.0339% ( 2) 00:09:48.639 26.388 - 26.585: 99.0983% ( 3) 00:09:48.639 26.585 - 26.782: 99.1842% ( 4) 00:09:48.639 26.782 - 26.978: 99.2915% ( 5) 00:09:48.639 26.978 - 27.175: 99.3345% ( 2) 00:09:48.639 27.175 - 27.372: 99.4204% ( 4) 00:09:48.639 27.372 - 27.569: 99.4633% ( 2) 00:09:48.639 27.569 - 27.766: 99.4848% ( 1) 00:09:48.639 27.963 - 28.160: 99.5062% ( 1) 00:09:48.639 28.357 - 28.554: 99.5277% ( 1) 00:09:48.639 28.554 - 28.751: 99.5492% ( 1) 00:09:48.639 29.538 - 29.735: 99.5706% ( 1) 00:09:48.639 29.735 - 29.932: 99.5921% ( 1) 00:09:48.639 
29.932 - 30.129: 99.6136% ( 1) 00:09:48.639 30.326 - 30.523: 99.6565% ( 2) 00:09:48.639 31.508 - 31.705: 99.6780% ( 1) 00:09:48.639 31.902 - 32.098: 99.6994% ( 1) 00:09:48.639 32.886 - 33.083: 99.7209% ( 1) 00:09:48.639 34.265 - 34.462: 99.7424% ( 1) 00:09:48.639 35.840 - 36.037: 99.7638% ( 1) 00:09:48.639 36.037 - 36.234: 99.7853% ( 1) 00:09:48.639 36.431 - 36.628: 99.8068% ( 1) 00:09:48.639 36.628 - 36.825: 99.8283% ( 1) 00:09:48.639 37.612 - 37.809: 99.8497% ( 1) 00:09:48.639 39.582 - 39.778: 99.8712% ( 1) 00:09:48.639 41.945 - 42.142: 99.8927% ( 1) 00:09:48.640 56.714 - 57.108: 99.9141% ( 1) 00:09:48.640 74.043 - 74.437: 99.9356% ( 1) 00:09:48.640 201.649 - 203.225: 99.9571% ( 1) 00:09:48.640 226.855 - 228.431: 99.9785% ( 1) 00:09:48.640 241.034 - 242.609: 100.0000% ( 1) 00:09:48.640 00:09:48.640 Complete histogram 00:09:48.640 ================== 00:09:48.640 Range in us Cumulative Count 00:09:48.640 7.778 - 7.828: 0.0215% ( 1) 00:09:48.640 7.926 - 7.975: 0.1073% ( 4) 00:09:48.640 7.975 - 8.025: 0.2147% ( 5) 00:09:48.640 8.025 - 8.074: 0.7514% ( 25) 00:09:48.640 8.074 - 8.123: 1.3954% ( 30) 00:09:48.640 8.123 - 8.172: 2.1683% ( 36) 00:09:48.640 8.172 - 8.222: 2.9626% ( 37) 00:09:48.640 8.222 - 8.271: 3.7140% ( 35) 00:09:48.640 8.271 - 8.320: 4.5513% ( 39) 00:09:48.640 8.320 - 8.369: 5.5174% ( 45) 00:09:48.640 8.369 - 8.418: 6.4191% ( 42) 00:09:48.640 8.418 - 8.468: 7.4710% ( 49) 00:09:48.640 8.468 - 8.517: 8.6733% ( 56) 00:09:48.640 8.517 - 8.566: 9.7896% ( 52) 00:09:48.640 8.566 - 8.615: 10.7772% ( 46) 00:09:48.640 8.615 - 8.665: 11.9579% ( 55) 00:09:48.640 8.665 - 8.714: 12.8596% ( 42) 00:09:48.640 8.714 - 8.763: 13.7398% ( 41) 00:09:48.640 8.763 - 8.812: 14.8132% ( 50) 00:09:48.640 8.812 - 8.862: 15.7578% ( 44) 00:09:48.640 8.862 - 8.911: 16.4663% ( 33) 00:09:48.640 8.911 - 8.960: 17.3250% ( 40) 00:09:48.640 8.960 - 9.009: 18.0764% ( 35) 00:09:48.640 9.009 - 9.058: 18.7634% ( 32) 00:09:48.640 9.058 - 9.108: 19.4719% ( 33) 00:09:48.640 9.108 - 9.157: 20.0515% ( 27) 00:09:48.640 9.157 - 9.206: 20.5024% ( 21) 00:09:48.640 9.206 - 9.255: 20.9532% ( 21) 00:09:48.640 9.255 - 9.305: 21.3611% ( 19) 00:09:48.640 9.305 - 9.354: 21.6187% ( 12) 00:09:48.640 9.354 - 9.403: 22.0696% ( 21) 00:09:48.640 9.403 - 9.452: 22.4131% ( 16) 00:09:48.640 9.452 - 9.502: 22.9068% ( 23) 00:09:48.640 9.502 - 9.551: 23.3577% ( 21) 00:09:48.640 9.551 - 9.600: 24.4955% ( 53) 00:09:48.640 9.600 - 9.649: 26.5779% ( 97) 00:09:48.640 9.649 - 9.698: 29.1112% ( 118) 00:09:48.640 9.698 - 9.748: 31.6874% ( 120) 00:09:48.640 9.748 - 9.797: 34.8862% ( 149) 00:09:48.640 9.797 - 9.846: 37.8489% ( 138) 00:09:48.640 9.846 - 9.895: 41.1121% ( 152) 00:09:48.640 9.895 - 9.945: 44.4611% ( 156) 00:09:48.640 9.945 - 9.994: 47.8961% ( 160) 00:09:48.640 9.994 - 10.043: 52.4903% ( 214) 00:09:48.640 10.043 - 10.092: 57.1919% ( 219) 00:09:48.640 10.092 - 10.142: 62.6664% ( 255) 00:09:48.640 10.142 - 10.191: 67.8832% ( 243) 00:09:48.640 10.191 - 10.240: 72.9927% ( 238) 00:09:48.640 10.240 - 10.289: 78.0378% ( 235) 00:09:48.640 10.289 - 10.338: 82.4818% ( 207) 00:09:48.640 10.338 - 10.388: 85.8738% ( 158) 00:09:48.640 10.388 - 10.437: 88.6003% ( 127) 00:09:48.640 10.437 - 10.486: 90.3607% ( 82) 00:09:48.640 10.486 - 10.535: 91.8635% ( 70) 00:09:48.640 10.535 - 10.585: 92.8510% ( 46) 00:09:48.640 10.585 - 10.634: 93.6453% ( 37) 00:09:48.640 10.634 - 10.683: 94.3538% ( 33) 00:09:48.640 10.683 - 10.732: 94.6329% ( 13) 00:09:48.640 10.732 - 10.782: 94.9549% ( 15) 00:09:48.640 10.782 - 10.831: 95.2769% ( 15) 00:09:48.640 10.831 - 10.880: 
95.4702% ( 9) 00:09:48.640 10.880 - 10.929: 95.6204% ( 7) 00:09:48.640 10.929 - 10.978: 95.7492% ( 6) 00:09:48.640 10.978 - 11.028: 95.9210% ( 8) 00:09:48.640 11.028 - 11.077: 96.0927% ( 8) 00:09:48.640 11.077 - 11.126: 96.1142% ( 1) 00:09:48.640 11.126 - 11.175: 96.2001% ( 4) 00:09:48.640 11.175 - 11.225: 96.2430% ( 2) 00:09:48.640 11.372 - 11.422: 96.2860% ( 2) 00:09:48.640 11.422 - 11.471: 96.3504% ( 3) 00:09:48.640 11.569 - 11.618: 96.3718% ( 1) 00:09:48.640 11.618 - 11.668: 96.3933% ( 1) 00:09:48.640 11.717 - 11.766: 96.4148% ( 1) 00:09:48.640 11.766 - 11.815: 96.4577% ( 2) 00:09:48.640 11.815 - 11.865: 96.4792% ( 1) 00:09:48.640 11.865 - 11.914: 96.5221% ( 2) 00:09:48.640 11.914 - 11.963: 96.5436% ( 1) 00:09:48.640 12.012 - 12.062: 96.5865% ( 2) 00:09:48.640 12.062 - 12.111: 96.6295% ( 2) 00:09:48.640 12.111 - 12.160: 96.6509% ( 1) 00:09:48.640 12.160 - 12.209: 96.6724% ( 1) 00:09:48.640 12.209 - 12.258: 96.6939% ( 1) 00:09:48.640 12.308 - 12.357: 96.7153% ( 1) 00:09:48.640 12.455 - 12.505: 96.7368% ( 1) 00:09:48.640 12.505 - 12.554: 96.7583% ( 1) 00:09:48.640 12.554 - 12.603: 96.8012% ( 2) 00:09:48.640 12.702 - 12.800: 96.8227% ( 1) 00:09:48.640 12.800 - 12.898: 96.8871% ( 3) 00:09:48.640 12.997 - 13.095: 96.9085% ( 1) 00:09:48.640 13.194 - 13.292: 96.9300% ( 1) 00:09:48.640 13.292 - 13.391: 96.9515% ( 1) 00:09:48.640 13.489 - 13.588: 97.0374% ( 4) 00:09:48.640 13.588 - 13.686: 97.1232% ( 4) 00:09:48.640 13.686 - 13.785: 97.1662% ( 2) 00:09:48.640 13.785 - 13.883: 97.2950% ( 6) 00:09:48.640 13.883 - 13.982: 97.3594% ( 3) 00:09:48.640 13.982 - 14.080: 97.4453% ( 4) 00:09:48.640 14.080 - 14.178: 97.5097% ( 3) 00:09:48.640 14.178 - 14.277: 97.6170% ( 5) 00:09:48.640 14.277 - 14.375: 97.6385% ( 1) 00:09:48.640 14.375 - 14.474: 97.7029% ( 3) 00:09:48.640 14.474 - 14.572: 97.7458% ( 2) 00:09:48.640 14.572 - 14.671: 97.7888% ( 2) 00:09:48.640 14.671 - 14.769: 97.8102% ( 1) 00:09:48.640 14.769 - 14.868: 97.8532% ( 2) 00:09:48.640 14.868 - 14.966: 97.8746% ( 1) 00:09:48.640 15.065 - 15.163: 97.9176% ( 2) 00:09:48.640 15.163 - 15.262: 97.9390% ( 1) 00:09:48.640 15.262 - 15.360: 97.9605% ( 1) 00:09:48.640 15.360 - 15.458: 98.0034% ( 2) 00:09:48.640 15.458 - 15.557: 98.0249% ( 1) 00:09:48.640 15.754 - 15.852: 98.0464% ( 1) 00:09:48.640 16.049 - 16.148: 98.0678% ( 1) 00:09:48.640 16.246 - 16.345: 98.1108% ( 2) 00:09:48.640 16.542 - 16.640: 98.1322% ( 1) 00:09:48.640 16.837 - 16.935: 98.1537% ( 1) 00:09:48.640 17.034 - 17.132: 98.1752% ( 1) 00:09:48.640 17.132 - 17.231: 98.2181% ( 2) 00:09:48.640 17.329 - 17.428: 98.2396% ( 1) 00:09:48.640 17.625 - 17.723: 98.2825% ( 2) 00:09:48.640 17.822 - 17.920: 98.3469% ( 3) 00:09:48.640 17.920 - 18.018: 98.3684% ( 1) 00:09:48.640 18.018 - 18.117: 98.3899% ( 1) 00:09:48.640 18.117 - 18.215: 98.4113% ( 1) 00:09:48.640 18.314 - 18.412: 98.4543% ( 2) 00:09:48.640 18.412 - 18.511: 98.5401% ( 4) 00:09:48.640 18.511 - 18.609: 98.6260% ( 4) 00:09:48.640 18.609 - 18.708: 98.7334% ( 5) 00:09:48.640 18.708 - 18.806: 98.7978% ( 3) 00:09:48.640 18.806 - 18.905: 98.8622% ( 3) 00:09:48.640 18.905 - 19.003: 98.9480% ( 4) 00:09:48.640 19.102 - 19.200: 99.0125% ( 3) 00:09:48.640 19.298 - 19.397: 99.0983% ( 4) 00:09:48.640 19.397 - 19.495: 99.1198% ( 1) 00:09:48.640 19.495 - 19.594: 99.1627% ( 2) 00:09:48.640 19.594 - 19.692: 99.1842% ( 1) 00:09:48.640 19.692 - 19.791: 99.2701% ( 4) 00:09:48.640 19.889 - 19.988: 99.2915% ( 1) 00:09:48.640 19.988 - 20.086: 99.3130% ( 1) 00:09:48.640 20.086 - 20.185: 99.3559% ( 2) 00:09:48.640 20.185 - 20.283: 99.4204% ( 3) 00:09:48.640 20.283 
- 20.382: 99.4418% ( 1) 00:09:48.640 20.382 - 20.480: 99.4633% ( 1) 00:09:48.640 20.578 - 20.677: 99.4848% ( 1) 00:09:48.640 20.677 - 20.775: 99.5277% ( 2) 00:09:48.640 20.874 - 20.972: 99.5492% ( 1) 00:09:48.640 20.972 - 21.071: 99.5706% ( 1) 00:09:48.640 21.858 - 21.957: 99.6136% ( 2) 00:09:48.640 22.942 - 23.040: 99.6350% ( 1) 00:09:48.640 23.631 - 23.729: 99.6565% ( 1) 00:09:48.640 24.615 - 24.714: 99.6780% ( 1) 00:09:48.640 26.191 - 26.388: 99.7424% ( 3) 00:09:48.640 27.175 - 27.372: 99.7638% ( 1) 00:09:48.640 27.372 - 27.569: 99.7853% ( 1) 00:09:48.640 27.569 - 27.766: 99.8068% ( 1) 00:09:48.640 28.554 - 28.751: 99.8283% ( 1) 00:09:48.640 29.538 - 29.735: 99.8712% ( 2) 00:09:48.640 31.311 - 31.508: 99.8927% ( 1) 00:09:48.640 39.385 - 39.582: 99.9141% ( 1) 00:09:48.640 46.474 - 46.671: 99.9356% ( 1) 00:09:48.640 49.231 - 49.428: 99.9571% ( 1) 00:09:48.640 94.917 - 95.311: 99.9785% ( 1) 00:09:48.640 107.126 - 107.914: 100.0000% ( 1) 00:09:48.640 00:09:48.640 ************************************ 00:09:48.640 END TEST nvme_overhead 00:09:48.640 ************************************ 00:09:48.640 00:09:48.640 real 0m1.219s 00:09:48.640 user 0m1.069s 00:09:48.640 sys 0m0.095s 00:09:48.640 14:04:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.640 14:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:48.640 14:04:51 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:48.640 14:04:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:48.641 14:04:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.641 14:04:51 -- common/autotest_common.sh@10 -- # set +x 00:09:48.641 ************************************ 00:09:48.641 START TEST nvme_arbitration 00:09:48.641 ************************************ 00:09:48.641 14:04:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:51.945 Initializing NVMe Controllers 00:09:51.945 Attached to 0000:00:06.0 00:09:51.945 Attached to 0000:00:07.0 00:09:51.945 Attached to 0000:00:09.0 00:09:51.945 Attached to 0000:00:08.0 00:09:51.945 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:51.945 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:51.945 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:51.945 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:51.945 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:51.945 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:51.945 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:51.945 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:51.945 Initialization complete. Launching workers. 
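The fully expanded command line above shows every default the arbitration example filled in. A hedged reading of the knobs that shape the per-core lines below: -c 0xf (binary 1111) places one worker on each of lcores 0 through 3, which matches the four "Starting thread on core N" lines; -q 64 looks like the queue depth, -w randrw with -M 50 a 50/50 random read/write mix, and -t 3 the run time in seconds. These roles are inferred from the output, not stated in it.

  # same invocation, flag roles inferred as described above
  /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
      -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 \
      -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0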
00:09:51.945 Starting thread on core 1 with urgent priority queue 00:09:51.945 Starting thread on core 2 with urgent priority queue 00:09:51.945 Starting thread on core 3 with urgent priority queue 00:09:51.945 Starting thread on core 0 with urgent priority queue 00:09:51.945 QEMU NVMe Ctrl (12340 ) core 0: 832.00 IO/s 120.19 secs/100000 ios 00:09:51.945 QEMU NVMe Ctrl (12342 ) core 0: 832.00 IO/s 120.19 secs/100000 ios 00:09:51.945 QEMU NVMe Ctrl (12341 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:09:51.945 QEMU NVMe Ctrl (12342 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:09:51.946 QEMU NVMe Ctrl (12343 ) core 2: 789.33 IO/s 126.69 secs/100000 ios 00:09:51.946 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:09:51.946 ======================================================== 00:09:51.946 00:09:51.946 ************************************ 00:09:51.946 END TEST nvme_arbitration 00:09:51.946 ************************************ 00:09:51.946 00:09:51.946 real 0m3.417s 00:09:51.946 user 0m9.475s 00:09:51.946 sys 0m0.134s 00:09:51.946 14:04:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.946 14:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:51.946 14:04:54 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:51.946 14:04:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:51.946 14:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.946 14:04:54 -- common/autotest_common.sh@10 -- # set +x 00:09:51.946 ************************************ 00:09:51.946 START TEST nvme_single_aen 00:09:51.946 ************************************ 00:09:51.946 14:04:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:52.208 [2024-12-08 14:04:54.876324] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:52.208 [2024-12-08 14:04:54.876601] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.208 [2024-12-08 14:04:55.014703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:52.209 [2024-12-08 14:04:55.017235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:52.209 [2024-12-08 14:04:55.018850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:52.209 [2024-12-08 14:04:55.020928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:52.209 Asynchronous Event Request test 00:09:52.209 Attached to 0000:00:06.0 00:09:52.209 Attached to 0000:00:07.0 00:09:52.209 Attached to 0000:00:09.0 00:09:52.209 Attached to 0000:00:08.0 00:09:52.209 Reset controller to setup AER completions for this process 00:09:52.209 Registering asynchronous event callbacks... 
00:09:52.209 Getting orig temperature thresholds of all controllers 00:09:52.209 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:52.209 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:52.209 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:52.209 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:52.209 Setting all controllers temperature threshold low to trigger AER 00:09:52.209 Waiting for all controllers temperature threshold to be set lower 00:09:52.209 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:52.209 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:52.209 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:52.209 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:52.209 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:52.209 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:52.209 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:52.209 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:52.209 Waiting for all controllers to trigger AER and reset threshold 00:09:52.209 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.209 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.209 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.209 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:52.209 Cleaning up... 00:09:52.209 00:09:52.209 real 0m0.231s 00:09:52.209 user 0m0.064s 00:09:52.209 sys 0m0.122s 00:09:52.209 14:04:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.209 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:09:52.209 ************************************ 00:09:52.209 END TEST nvme_single_aen 00:09:52.209 ************************************ 00:09:52.209 14:04:55 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:52.209 14:04:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.209 14:04:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.209 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:09:52.470 ************************************ 00:09:52.470 START TEST nvme_doorbell_aers 00:09:52.470 ************************************ 00:09:52.470 14:04:55 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:52.470 14:04:55 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:52.470 14:04:55 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:52.470 14:04:55 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:52.470 14:04:55 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:52.470 14:04:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:52.470 14:04:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:52.470 14:04:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:52.470 14:04:55 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:52.470 14:04:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:52.470 14:04:55 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:52.470 14:04:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:52.470 14:04:55 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:52.470 14:04:55 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:52.732 [2024-12-08 14:04:55.421882] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:02.786 Executing: test_write_invalid_db 00:10:02.786 Waiting for AER completion... 00:10:02.786 Failure: test_write_invalid_db 00:10:02.786 00:10:02.786 Executing: test_invalid_db_write_overflow_sq 00:10:02.786 Waiting for AER completion... 00:10:02.786 Failure: test_invalid_db_write_overflow_sq 00:10:02.786 00:10:02.786 Executing: test_invalid_db_write_overflow_cq 00:10:02.786 Waiting for AER completion... 00:10:02.786 Failure: test_invalid_db_write_overflow_cq 00:10:02.786 00:10:02.786 14:05:05 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:02.786 14:05:05 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:02.786 [2024-12-08 14:05:05.451572] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:12.754 Executing: test_write_invalid_db 00:10:12.754 Waiting for AER completion... 00:10:12.754 Failure: test_write_invalid_db 00:10:12.754 00:10:12.754 Executing: test_invalid_db_write_overflow_sq 00:10:12.754 Waiting for AER completion... 00:10:12.754 Failure: test_invalid_db_write_overflow_sq 00:10:12.754 00:10:12.754 Executing: test_invalid_db_write_overflow_cq 00:10:12.754 Waiting for AER completion... 00:10:12.754 Failure: test_invalid_db_write_overflow_cq 00:10:12.754 00:10:12.754 14:05:15 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:12.754 14:05:15 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:12.754 [2024-12-08 14:05:15.483923] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:22.727 Executing: test_write_invalid_db 00:10:22.727 Waiting for AER completion... 00:10:22.727 Failure: test_write_invalid_db 00:10:22.727 00:10:22.727 Executing: test_invalid_db_write_overflow_sq 00:10:22.727 Waiting for AER completion... 00:10:22.727 Failure: test_invalid_db_write_overflow_sq 00:10:22.727 00:10:22.727 Executing: test_invalid_db_write_overflow_cq 00:10:22.727 Waiting for AER completion... 00:10:22.727 Failure: test_invalid_db_write_overflow_cq 00:10:22.727 00:10:22.727 14:05:25 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:22.727 14:05:25 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:22.727 [2024-12-08 14:05:25.510099] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 Executing: test_write_invalid_db 00:10:32.693 Waiting for AER completion... 00:10:32.693 Failure: test_write_invalid_db 00:10:32.693 00:10:32.693 Executing: test_invalid_db_write_overflow_sq 00:10:32.693 Waiting for AER completion... 00:10:32.693 Failure: test_invalid_db_write_overflow_sq 00:10:32.693 00:10:32.693 Executing: test_invalid_db_write_overflow_cq 00:10:32.693 Waiting for AER completion... 
00:10:32.693 Failure: test_invalid_db_write_overflow_cq 00:10:32.693 00:10:32.693 00:10:32.693 real 0m40.212s 00:10:32.693 user 0m33.972s 00:10:32.693 sys 0m5.825s 00:10:32.693 14:05:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.693 14:05:35 -- common/autotest_common.sh@10 -- # set +x 00:10:32.693 ************************************ 00:10:32.693 END TEST nvme_doorbell_aers 00:10:32.693 ************************************ 00:10:32.693 14:05:35 -- nvme/nvme.sh@97 -- # uname 00:10:32.693 14:05:35 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:32.693 14:05:35 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:32.693 14:05:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:32.693 14:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.693 14:05:35 -- common/autotest_common.sh@10 -- # set +x 00:10:32.693 ************************************ 00:10:32.693 START TEST nvme_multi_aen 00:10:32.693 ************************************ 00:10:32.693 14:05:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:32.693 [2024-12-08 14:05:35.427373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.693 [2024-12-08 14:05:35.427644] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.693 [2024-12-08 14:05:35.557742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:32.693 [2024-12-08 14:05:35.557876] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.557957] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.557995] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.559671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:32.693 [2024-12-08 14:05:35.559769] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.559867] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.559897] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.560874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:32.693 [2024-12-08 14:05:35.560945] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.561012] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 
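The nvme_doorbell_aers run that just finished above (real 0m40.212s, roughly a 10-second budget per controller) also exposed the harness's device-discovery idiom in its xtrace: gen_nvme.sh emits a JSON config and jq extracts each controller's PCI address. A condensed sketch of that sequence, using only calls that appear verbatim in the trace:

  # gather every NVMe bdf from the generated config (idiom shown in the trace)
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} == 0 )) && exit 1

  # stress doorbells/AERs once per controller, 10 seconds each
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done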
00:10:32.693 [2024-12-08 14:05:35.561041] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.561994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:32.693 [2024-12-08 14:05:35.562060] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.562121] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.562149] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63890) is not found. Dropping the request. 00:10:32.693 [2024-12-08 14:05:35.569712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.693 [2024-12-08 14:05:35.569949] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 Child process pid: 64411 00:10:32.693 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.950 [Child] Asynchronous Event Request test 00:10:32.950 [Child] Attached to 0000:00:06.0 00:10:32.950 [Child] Attached to 0000:00:07.0 00:10:32.950 [Child] Attached to 0000:00:09.0 00:10:32.950 [Child] Attached to 0000:00:08.0 00:10:32.950 [Child] Registering asynchronous event callbacks... 00:10:32.950 [Child] Getting orig temperature thresholds of all controllers 00:10:32.950 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:32.950 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.950 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.950 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.950 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.950 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.950 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.950 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.950 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.950 [Child] Cleaning up... 00:10:32.950 Asynchronous Event Request test 00:10:32.950 Attached to 0000:00:06.0 00:10:32.950 Attached to 0000:00:07.0 00:10:32.950 Attached to 0000:00:09.0 00:10:32.950 Attached to 0000:00:08.0 00:10:32.950 Reset controller to setup AER completions for this process 00:10:32.950 Registering asynchronous event callbacks... 
00:10:32.950 Getting orig temperature thresholds of all controllers 00:10:32.950 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:32.950 Setting all controllers temperature threshold low to trigger AER 00:10:32.950 Waiting for all controllers temperature threshold to be set lower 00:10:32.950 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.951 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:32.951 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.951 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:32.951 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.951 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:32.951 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:32.951 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:32.951 Waiting for all controllers to trigger AER and reset threshold 00:10:32.951 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.951 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.951 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.951 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:32.951 Cleaning up... 00:10:32.951 00:10:32.951 real 0m0.406s 00:10:32.951 user 0m0.122s 00:10:32.951 sys 0m0.177s 00:10:32.951 14:05:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.951 14:05:35 -- common/autotest_common.sh@10 -- # set +x 00:10:32.951 ************************************ 00:10:32.951 END TEST nvme_multi_aen 00:10:32.951 ************************************ 00:10:32.951 14:05:35 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:32.951 14:05:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:32.951 14:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.951 14:05:35 -- common/autotest_common.sh@10 -- # set +x 00:10:32.951 ************************************ 00:10:32.951 START TEST nvme_startup 00:10:32.951 ************************************ 00:10:32.951 14:05:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:33.208 Initializing NVMe Controllers 00:10:33.208 Attached to 0000:00:06.0 00:10:33.208 Attached to 0000:00:07.0 00:10:33.208 Attached to 0000:00:09.0 00:10:33.208 Attached to 0000:00:08.0 00:10:33.208 Initialization complete. 00:10:33.208 Time used:150323.672 (us). 
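startup was invoked as startup -t 1000000 and the controllers came up in "Time used:150323.672 (us)", so -t plausibly sets an initialization budget in microseconds (about one second here). That unit is an assumption; the log never states it.

  # attach all controllers; -t unit assumed to be microseconds,
  # based on the 'Time used: ... (us)' line above
  /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000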
00:10:33.208 00:10:33.208 real 0m0.207s 00:10:33.208 user 0m0.056s 00:10:33.208 sys 0m0.102s 00:10:33.208 ************************************ 00:10:33.208 END TEST nvme_startup 00:10:33.208 ************************************ 00:10:33.208 14:05:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:33.208 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 14:05:36 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:33.208 14:05:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:33.208 14:05:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:33.208 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:10:33.208 ************************************ 00:10:33.208 START TEST nvme_multi_secondary 00:10:33.208 ************************************ 00:10:33.208 14:05:36 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:33.208 14:05:36 -- nvme/nvme.sh@52 -- # pid0=64467 00:10:33.208 14:05:36 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:33.208 14:05:36 -- nvme/nvme.sh@54 -- # pid1=64468 00:10:33.208 14:05:36 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:33.208 14:05:36 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:36.488 Initializing NVMe Controllers 00:10:36.488 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:36.488 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:36.488 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:36.488 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:36.488 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:36.488 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:36.488 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:36.488 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:36.488 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:36.488 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:36.488 Initialization complete. Launching workers. 
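nvme_multi_secondary, started above, is a multi-process exercise: three spdk_nvme_perf instances share shared-memory id 0 (-i 0) but run on disjoint core masks, with the -c 0x1 instance (pid0) presumably acting as the primary process for the -c 0x2 and -c 0x4 secondaries whose latency tables follow. A sketch of the launch pattern, built only from the command lines, pid captures, and wait calls visible in this section:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, runs longest
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, foreground

  wait "$pid0" "$pid1"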
00:10:36.488 ======================================================== 00:10:36.488 Latency(us) 00:10:36.488 Device Information : IOPS MiB/s Average min max 00:10:36.488 PCIE (0000:00:06.0) NSID 1 from core 1: 7895.71 30.84 2025.06 715.20 7339.21 00:10:36.488 PCIE (0000:00:07.0) NSID 1 from core 1: 7895.71 30.84 2026.12 738.75 7462.43 00:10:36.488 PCIE (0000:00:09.0) NSID 1 from core 1: 7895.71 30.84 2026.10 728.05 6043.84 00:10:36.488 PCIE (0000:00:08.0) NSID 1 from core 1: 7895.71 30.84 2026.12 727.61 6855.52 00:10:36.488 PCIE (0000:00:08.0) NSID 2 from core 1: 7895.71 30.84 2026.13 737.74 6782.36 00:10:36.488 PCIE (0000:00:08.0) NSID 3 from core 1: 7895.71 30.84 2026.15 734.79 6584.73 00:10:36.488 ======================================================== 00:10:36.488 Total : 47374.25 185.06 2025.95 715.20 7462.43 00:10:36.488 00:10:36.751 Initializing NVMe Controllers 00:10:36.751 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:36.751 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:36.751 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:36.751 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:36.751 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:36.751 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:36.751 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:36.751 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:36.751 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:36.751 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:36.751 Initialization complete. Launching workers. 00:10:36.751 ======================================================== 00:10:36.751 Latency(us) 00:10:36.751 Device Information : IOPS MiB/s Average min max 00:10:36.751 PCIE (0000:00:06.0) NSID 1 from core 2: 2881.86 11.26 5550.28 920.48 14857.31 00:10:36.751 PCIE (0000:00:07.0) NSID 1 from core 2: 2881.86 11.26 5551.47 745.07 16509.40 00:10:36.751 PCIE (0000:00:09.0) NSID 1 from core 2: 2881.86 11.26 5551.74 965.11 18408.27 00:10:36.751 PCIE (0000:00:08.0) NSID 1 from core 2: 2881.86 11.26 5551.25 963.96 15559.06 00:10:36.751 PCIE (0000:00:08.0) NSID 2 from core 2: 2881.86 11.26 5551.27 976.35 19384.64 00:10:36.751 PCIE (0000:00:08.0) NSID 3 from core 2: 2881.86 11.26 5551.66 905.78 15328.18 00:10:36.751 ======================================================== 00:10:36.751 Total : 17291.16 67.54 5551.28 745.07 19384.64 00:10:36.751 00:10:36.751 14:05:39 -- nvme/nvme.sh@56 -- # wait 64467 00:10:38.651 Initializing NVMe Controllers 00:10:38.651 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:38.651 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:38.651 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:38.651 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:38.651 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:38.651 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:38.651 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:38.651 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:38.651 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:38.651 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:38.651 Initialization complete. Launching workers. 
00:10:38.651 ======================================================== 00:10:38.651 Latency(us) 00:10:38.651 Device Information : IOPS MiB/s Average min max 00:10:38.651 PCIE (0000:00:06.0) NSID 1 from core 0: 10597.43 41.40 1508.66 722.95 8988.94 00:10:38.651 PCIE (0000:00:07.0) NSID 1 from core 0: 10597.43 41.40 1509.43 721.94 9272.74 00:10:38.651 PCIE (0000:00:09.0) NSID 1 from core 0: 10597.43 41.40 1509.41 668.62 8668.74 00:10:38.651 PCIE (0000:00:08.0) NSID 1 from core 0: 10597.43 41.40 1509.40 658.56 8935.35 00:10:38.651 PCIE (0000:00:08.0) NSID 2 from core 0: 10597.43 41.40 1509.38 634.16 8138.25 00:10:38.651 PCIE (0000:00:08.0) NSID 3 from core 0: 10597.43 41.40 1509.36 611.32 8969.41 00:10:38.651 ======================================================== 00:10:38.651 Total : 63584.59 248.38 1509.27 611.32 9272.74 00:10:38.651 00:10:38.651 14:05:41 -- nvme/nvme.sh@57 -- # wait 64468 00:10:38.651 14:05:41 -- nvme/nvme.sh@61 -- # pid0=64537 00:10:38.651 14:05:41 -- nvme/nvme.sh@63 -- # pid1=64538 00:10:38.651 14:05:41 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:38.651 14:05:41 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:38.651 14:05:41 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:41.934 Initializing NVMe Controllers 00:10:41.934 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:41.934 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:41.934 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:41.934 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:41.934 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:41.934 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:41.934 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:41.934 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:41.934 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:41.934 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:41.934 Initialization complete. Launching workers. 
00:10:41.934 ======================================================== 00:10:41.934 Latency(us) 00:10:41.934 Device Information : IOPS MiB/s Average min max 00:10:41.934 PCIE (0000:00:06.0) NSID 1 from core 0: 7682.17 30.01 2081.52 724.52 8120.55 00:10:41.934 PCIE (0000:00:07.0) NSID 1 from core 0: 7682.17 30.01 2082.50 742.52 7549.90 00:10:41.934 PCIE (0000:00:09.0) NSID 1 from core 0: 7682.17 30.01 2082.48 749.57 7656.02 00:10:41.934 PCIE (0000:00:08.0) NSID 1 from core 0: 7682.17 30.01 2082.45 746.48 6672.12 00:10:41.934 PCIE (0000:00:08.0) NSID 2 from core 0: 7682.17 30.01 2082.87 736.78 8048.30 00:10:41.934 PCIE (0000:00:08.0) NSID 3 from core 0: 7682.17 30.01 2083.48 735.35 7054.44 00:10:41.934 ======================================================== 00:10:41.934 Total : 46093.02 180.05 2082.55 724.52 8120.55 00:10:41.934 00:10:42.193 Initializing NVMe Controllers 00:10:42.193 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:42.193 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:42.193 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:42.193 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:42.193 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:42.193 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:42.193 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:42.193 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:42.193 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:42.193 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:42.193 Initialization complete. Launching workers. 00:10:42.193 ======================================================== 00:10:42.193 Latency(us) 00:10:42.193 Device Information : IOPS MiB/s Average min max 00:10:42.193 PCIE (0000:00:06.0) NSID 1 from core 1: 7656.56 29.91 2088.37 702.82 8226.70 00:10:42.193 PCIE (0000:00:07.0) NSID 1 from core 1: 7656.56 29.91 2089.23 725.15 8175.17 00:10:42.193 PCIE (0000:00:09.0) NSID 1 from core 1: 7656.56 29.91 2089.18 731.09 8266.42 00:10:42.193 PCIE (0000:00:08.0) NSID 1 from core 1: 7656.56 29.91 2089.13 731.93 6483.18 00:10:42.193 PCIE (0000:00:08.0) NSID 2 from core 1: 7656.56 29.91 2089.07 714.00 6516.01 00:10:42.193 PCIE (0000:00:08.0) NSID 3 from core 1: 7656.56 29.91 2089.02 659.52 7578.58 00:10:42.193 ======================================================== 00:10:42.193 Total : 45939.38 179.45 2089.00 659.52 8266.42 00:10:42.193 00:10:44.102 Initializing NVMe Controllers 00:10:44.102 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:44.102 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:44.102 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:44.102 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:44.102 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:44.102 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:44.102 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:44.102 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:44.102 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:44.102 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:44.102 Initialization complete. Launching workers. 
00:10:44.102 ======================================================== 00:10:44.102 Latency(us) 00:10:44.102 Device Information : IOPS MiB/s Average min max 00:10:44.102 PCIE (0000:00:06.0) NSID 1 from core 2: 3113.52 12.16 5137.51 790.80 18865.97 00:10:44.102 PCIE (0000:00:07.0) NSID 1 from core 2: 3113.52 12.16 5138.59 780.95 18773.98 00:10:44.102 PCIE (0000:00:09.0) NSID 1 from core 2: 3113.52 12.16 5138.82 796.30 19597.14 00:10:44.102 PCIE (0000:00:08.0) NSID 1 from core 2: 3113.52 12.16 5138.77 772.47 18971.26 00:10:44.102 PCIE (0000:00:08.0) NSID 2 from core 2: 3113.52 12.16 5138.20 801.94 18041.38 00:10:44.102 PCIE (0000:00:08.0) NSID 3 from core 2: 3113.52 12.16 5138.16 809.23 18856.32 00:10:44.102 ======================================================== 00:10:44.102 Total : 18681.12 72.97 5138.34 772.47 19597.14 00:10:44.102 00:10:44.102 ************************************ 00:10:44.102 END TEST nvme_multi_secondary 00:10:44.102 ************************************ 00:10:44.102 14:05:47 -- nvme/nvme.sh@65 -- # wait 64537 00:10:44.102 14:05:47 -- nvme/nvme.sh@66 -- # wait 64538 00:10:44.102 00:10:44.102 real 0m10.930s 00:10:44.102 user 0m18.670s 00:10:44.102 sys 0m0.677s 00:10:44.102 14:05:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.102 14:05:47 -- common/autotest_common.sh@10 -- # set +x 00:10:44.361 14:05:47 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:44.361 14:05:47 -- nvme/nvme.sh@102 -- # kill_stub 00:10:44.361 14:05:47 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63472 ]] 00:10:44.361 14:05:47 -- common/autotest_common.sh@1076 -- # kill 63472 00:10:44.361 14:05:47 -- common/autotest_common.sh@1077 -- # wait 63472 00:10:44.932 [2024-12-08 14:05:47.612246] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:44.932 [2024-12-08 14:05:47.612332] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:44.932 [2024-12-08 14:05:47.612345] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:44.932 [2024-12-08 14:05:47.612359] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:45.504 [2024-12-08 14:05:48.127690] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:45.504 [2024-12-08 14:05:48.127940] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:45.504 [2024-12-08 14:05:48.127960] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:45.504 [2024-12-08 14:05:48.127973] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:46.444 [2024-12-08 14:05:49.140268] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:46.444 [2024-12-08 14:05:49.140517] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. 
Dropping the request. 00:10:46.444 [2024-12-08 14:05:49.140536] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:46.444 [2024-12-08 14:05:49.140550] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:47.830 [2024-12-08 14:05:50.644550] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:47.830 [2024-12-08 14:05:50.644640] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:47.830 [2024-12-08 14:05:50.644654] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:47.830 [2024-12-08 14:05:50.644672] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64410) is not found. Dropping the request. 00:10:48.091 14:05:50 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:48.091 14:05:50 -- common/autotest_common.sh@1083 -- # echo 2 00:10:48.091 14:05:50 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:48.091 14:05:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.091 14:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.091 14:05:50 -- common/autotest_common.sh@10 -- # set +x 00:10:48.091 ************************************ 00:10:48.091 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:48.091 ************************************ 00:10:48.091 14:05:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:48.091 * Looking for test storage... 00:10:48.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:48.091 14:05:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:48.091 14:05:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:48.091 14:05:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:48.352 14:05:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:48.352 14:05:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:48.352 14:05:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:48.352 14:05:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:48.352 14:05:51 -- scripts/common.sh@335 -- # IFS=.-: 00:10:48.352 14:05:51 -- scripts/common.sh@335 -- # read -ra ver1 00:10:48.352 14:05:51 -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.352 14:05:51 -- scripts/common.sh@336 -- # read -ra ver2 00:10:48.352 14:05:51 -- scripts/common.sh@337 -- # local 'op=<' 00:10:48.352 14:05:51 -- scripts/common.sh@339 -- # ver1_l=2 00:10:48.352 14:05:51 -- scripts/common.sh@340 -- # ver2_l=1 00:10:48.352 14:05:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:48.352 14:05:51 -- scripts/common.sh@343 -- # case "$op" in 00:10:48.352 14:05:51 -- scripts/common.sh@344 -- # : 1 00:10:48.352 14:05:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:48.352 14:05:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.352 14:05:51 -- scripts/common.sh@364 -- # decimal 1 00:10:48.352 14:05:51 -- scripts/common.sh@352 -- # local d=1 00:10:48.352 14:05:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.352 14:05:51 -- scripts/common.sh@354 -- # echo 1 00:10:48.352 14:05:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:48.352 14:05:51 -- scripts/common.sh@365 -- # decimal 2 00:10:48.352 14:05:51 -- scripts/common.sh@352 -- # local d=2 00:10:48.352 14:05:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.352 14:05:51 -- scripts/common.sh@354 -- # echo 2 00:10:48.352 14:05:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:48.352 14:05:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:48.352 14:05:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:48.352 14:05:51 -- scripts/common.sh@367 -- # return 0 00:10:48.352 14:05:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.352 14:05:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.352 --rc genhtml_branch_coverage=1 00:10:48.352 --rc genhtml_function_coverage=1 00:10:48.352 --rc genhtml_legend=1 00:10:48.352 --rc geninfo_all_blocks=1 00:10:48.352 --rc geninfo_unexecuted_blocks=1 00:10:48.352 00:10:48.352 ' 00:10:48.352 14:05:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.352 --rc genhtml_branch_coverage=1 00:10:48.352 --rc genhtml_function_coverage=1 00:10:48.352 --rc genhtml_legend=1 00:10:48.352 --rc geninfo_all_blocks=1 00:10:48.352 --rc geninfo_unexecuted_blocks=1 00:10:48.352 00:10:48.352 ' 00:10:48.352 14:05:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.352 --rc genhtml_branch_coverage=1 00:10:48.352 --rc genhtml_function_coverage=1 00:10:48.352 --rc genhtml_legend=1 00:10:48.352 --rc geninfo_all_blocks=1 00:10:48.352 --rc geninfo_unexecuted_blocks=1 00:10:48.352 00:10:48.352 ' 00:10:48.352 14:05:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.352 --rc genhtml_branch_coverage=1 00:10:48.352 --rc genhtml_function_coverage=1 00:10:48.352 --rc genhtml_legend=1 00:10:48.352 --rc geninfo_all_blocks=1 00:10:48.352 --rc geninfo_unexecuted_blocks=1 00:10:48.352 00:10:48.352 ' 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:48.352 14:05:51 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:48.352 14:05:51 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:48.352 14:05:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:48.352 14:05:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:48.352 14:05:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:48.352 14:05:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:48.352 14:05:51 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:48.352 14:05:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:48.352 14:05:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:48.352 14:05:51 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:48.352 14:05:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:48.352 14:05:51 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64732 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:48.352 14:05:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64732 00:10:48.352 14:05:51 -- common/autotest_common.sh@829 -- # '[' -z 64732 ']' 00:10:48.352 14:05:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.352 14:05:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.352 14:05:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.352 14:05:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.352 14:05:51 -- common/autotest_common.sh@10 -- # set +x 00:10:48.352 [2024-12-08 14:05:51.178385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
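An aside for readers following the xtrace above: `get_first_nvme_bdf` resolves the PCI address the stuck-admin-command test will bind to by asking `gen_nvme.sh` for the generated bdev JSON and pulling each controller's `traddr` out with jq. A minimal stand-alone sketch of that discovery step, assuming the same repo layout as in this log (the wrapper name `first_bdf` is illustrative, not the exact helper from autotest_common.sh):

```bash
#!/usr/bin/env bash
# Sketch of the bdf-discovery pattern traced above: gen_nvme.sh emits a JSON
# bdev config whose .config[].params.traddr fields are NVMe PCI addresses.
rootdir=/home/vagrant/spdk_repo/spdk   # checkout location, as in this log

get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1   # no controllers attached
    printf '%s\n' "${bdfs[@]}"
}

first_bdf=$(get_nvme_bdfs | head -n1)
[[ -z $first_bdf ]] && { echo 'no NVMe controllers found' >&2; exit 1; }
echo "using $first_bdf"   # the run above selects 0000:00:06.0 of the four
```

With a bdf in hand, the test starts `spdk_tgt -m 0xF` across all four cores and blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk.sock` accepts connections, which is the wait the surrounding log lines show.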
00:10:48.352 [2024-12-08 14:05:51.178528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64732 ] 00:10:48.613 [2024-12-08 14:05:51.350920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.874 [2024-12-08 14:05:51.623901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:48.874 [2024-12-08 14:05:51.624438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.874 [2024-12-08 14:05:51.624740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.874 [2024-12-08 14:05:51.625113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.874 [2024-12-08 14:05:51.625152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.816 14:05:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.816 14:05:52 -- common/autotest_common.sh@862 -- # return 0 00:10:49.816 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:49.816 14:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.816 14:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:50.077 nvme0n1 00:10:50.077 14:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WKwC4.txt 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:50.077 14:05:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.077 14:05:52 -- common/autotest_common.sh@10 -- # set +x 00:10:50.077 true 00:10:50.077 14:05:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733666752 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64764 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:50.077 14:05:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:51.976 14:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.976 14:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:51.976 [2024-12-08 14:05:54.780514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:51.976 [2024-12-08 14:05:54.780736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:51.976 [2024-12-08 14:05:54.780757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:51.976 [2024-12-08 14:05:54.780769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.976 [2024-12-08 14:05:54.782512] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:51.976 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64764 00:10:51.976 14:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64764 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64764 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:51.976 14:05:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.976 14:05:54 -- common/autotest_common.sh@10 -- # set +x 00:10:51.976 14:05:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WKwC4.txt 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:51.976 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:51.977 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:51.977 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:51.977 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:51.977 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:51.977 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WKwC4.txt 00:10:51.977 14:05:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64732 00:10:51.977 14:05:54 -- common/autotest_common.sh@936 -- # '[' -z 64732 ']' 00:10:51.977 14:05:54 -- common/autotest_common.sh@940 -- # kill -0 64732 00:10:51.977 14:05:54 -- common/autotest_common.sh@941 -- # uname 00:10:51.977 14:05:54 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:51.977 14:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64732 00:10:51.977 killing process with pid 64732 00:10:51.977 14:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:51.977 14:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:51.977 14:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64732' 00:10:51.977 14:05:54 -- common/autotest_common.sh@955 -- # kill 64732 00:10:51.977 14:05:54 -- common/autotest_common.sh@960 -- # wait 64732 00:10:53.351 14:05:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:53.351 14:05:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:53.351 00:10:53.351 real 0m5.279s 00:10:53.351 user 0m18.158s 00:10:53.351 sys 0m0.610s 00:10:53.351 ************************************ 00:10:53.351 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:53.351 ************************************ 00:10:53.351 14:05:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.351 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:10:53.351 14:05:56 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:53.351 14:05:56 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:53.351 14:05:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:53.351 14:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.351 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:10:53.351 ************************************ 00:10:53.351 START TEST nvme_fio 00:10:53.351 ************************************ 00:10:53.351 14:05:56 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:53.351 14:05:56 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:53.351 14:05:56 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:53.351 14:05:56 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:53.351 14:05:56 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:53.351 14:05:56 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:53.351 14:05:56 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:53.351 14:05:56 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:53.351 14:05:56 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:53.351 14:05:56 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:53.351 14:05:56 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:53.351 14:05:56 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:53.351 14:05:56 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:53.351 14:05:56 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:53.351 14:05:56 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:53.351 14:05:56 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:53.612 14:05:56 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:53.612 14:05:56 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:53.873 14:05:56 -- nvme/nvme.sh@41 -- # bs=4096 00:10:53.873 14:05:56 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:53.873 14:05:56 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:53.873 14:05:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:53.873 14:05:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:53.873 14:05:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:53.873 14:05:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.873 14:05:56 -- common/autotest_common.sh@1330 -- # shift 00:10:53.873 14:05:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:53.873 14:05:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:53.873 14:05:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.873 14:05:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:53.873 14:05:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:53.873 14:05:56 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:53.873 14:05:56 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:53.873 14:05:56 -- common/autotest_common.sh@1336 -- # break 00:10:53.873 14:05:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:53.873 14:05:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:54.135 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:54.135 fio-3.35 00:10:54.135 Starting 1 thread 00:10:58.343 00:10:58.343 test: (groupid=0, jobs=1): err= 0: pid=64900: Sun Dec 8 14:06:01 2024 00:10:58.343 read: IOPS=20.3k, BW=79.2MiB/s (83.1MB/s)(159MiB/2001msec) 00:10:58.343 slat (usec): min=3, max=116, avg= 5.30, stdev= 2.64 00:10:58.343 clat (usec): min=687, max=9231, avg=3117.73, stdev=1095.05 00:10:58.343 lat (usec): min=691, max=9245, avg=3123.03, stdev=1096.29 00:10:58.343 clat percentiles (usec): 00:10:58.343 | 1.00th=[ 1729], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2376], 00:10:58.343 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2900], 00:10:58.343 | 70.00th=[ 3130], 80.00th=[ 3687], 90.00th=[ 4883], 95.00th=[ 5604], 00:10:58.343 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8029], 00:10:58.343 | 99.99th=[ 8717] 00:10:58.343 bw ( KiB/s): min=73704, max=80912, per=96.74%, avg=78480.00, stdev=4136.37, samples=3 00:10:58.343 iops : min=18426, max=20228, avg=19620.00, stdev=1034.09, samples=3 00:10:58.343 write: IOPS=20.2k, BW=79.0MiB/s (82.9MB/s)(158MiB/2001msec); 0 zone resets 00:10:58.343 slat (nsec): min=3381, max=79942, avg=5397.02, stdev=2540.18 00:10:58.343 clat (usec): min=671, max=9597, avg=3174.60, stdev=1114.93 00:10:58.343 lat (usec): min=675, max=9601, avg=3180.00, stdev=1116.03 00:10:58.343 clat percentiles (usec): 00:10:58.343 | 1.00th=[ 1762], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:10:58.343 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2933], 00:10:58.343 | 70.00th=[ 3195], 80.00th=[ 3818], 90.00th=[ 4948], 95.00th=[ 5669], 00:10:58.343 | 99.00th=[ 6783], 99.50th=[ 7242], 
99.90th=[ 7963], 99.95th=[ 8455], 00:10:58.343 | 99.99th=[ 8979] 00:10:58.343 bw ( KiB/s): min=73696, max=81032, per=97.08%, avg=78578.67, stdev=4228.53, samples=3 00:10:58.343 iops : min=18424, max=20258, avg=19644.67, stdev=1057.13, samples=3 00:10:58.343 lat (usec) : 750=0.01%, 1000=0.02% 00:10:58.343 lat (msec) : 2=2.05%, 4=80.49%, 10=17.42% 00:10:58.343 cpu : usr=99.05%, sys=0.10%, ctx=5, majf=0, minf=608 00:10:58.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:58.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.343 issued rwts: total=40582,40490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.343 00:10:58.343 Run status group 0 (all jobs): 00:10:58.343 READ: bw=79.2MiB/s (83.1MB/s), 79.2MiB/s-79.2MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec 00:10:58.343 WRITE: bw=79.0MiB/s (82.9MB/s), 79.0MiB/s-79.0MiB/s (82.9MB/s-82.9MB/s), io=158MiB (166MB), run=2001-2001msec 00:10:58.343 ----------------------------------------------------- 00:10:58.343 Suppressions used: 00:10:58.343 count bytes template 00:10:58.343 1 32 /usr/src/fio/parse.c 00:10:58.343 1 8 libtcmalloc_minimal.so 00:10:58.343 ----------------------------------------------------- 00:10:58.343 00:10:58.343 14:06:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:58.343 14:06:01 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:58.343 14:06:01 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:58.343 14:06:01 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:58.604 14:06:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:58.604 14:06:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:58.865 14:06:01 -- nvme/nvme.sh@41 -- # bs=4096 00:10:58.865 14:06:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:58.865 14:06:01 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:58.865 14:06:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:58.865 14:06:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:58.865 14:06:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:58.865 14:06:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.865 14:06:01 -- common/autotest_common.sh@1330 -- # shift 00:10:58.865 14:06:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:58.865 14:06:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:58.865 14:06:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:58.865 14:06:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.865 14:06:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:58.865 14:06:01 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:58.865 14:06:01 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:58.865 14:06:01 -- 
common/autotest_common.sh@1336 -- # break 00:10:58.865 14:06:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:58.865 14:06:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:59.140 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:59.140 fio-3.35 00:10:59.140 Starting 1 thread 00:11:05.731 00:11:05.731 test: (groupid=0, jobs=1): err= 0: pid=64975: Sun Dec 8 14:06:07 2024 00:11:05.731 read: IOPS=18.3k, BW=71.5MiB/s (75.0MB/s)(143MiB/2001msec) 00:11:05.731 slat (nsec): min=4009, max=95002, avg=6470.40, stdev=3212.60 00:11:05.731 clat (usec): min=734, max=11302, avg=3468.90, stdev=1195.88 00:11:05.731 lat (usec): min=742, max=11397, avg=3475.37, stdev=1197.60 00:11:05.731 clat percentiles (usec): 00:11:05.731 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2704], 00:11:05.731 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3130], 00:11:05.731 | 70.00th=[ 3425], 80.00th=[ 4047], 90.00th=[ 5342], 95.00th=[ 6259], 00:11:05.731 | 99.00th=[ 7308], 99.50th=[ 7832], 99.90th=[ 9765], 99.95th=[10028], 00:11:05.731 | 99.99th=[11207] 00:11:05.731 bw ( KiB/s): min=72496, max=80712, per=100.00%, avg=76280.00, stdev=4146.15, samples=3 00:11:05.731 iops : min=18124, max=20178, avg=19070.00, stdev=1036.54, samples=3 00:11:05.731 write: IOPS=18.3k, BW=71.5MiB/s (75.0MB/s)(143MiB/2001msec); 0 zone resets 00:11:05.731 slat (nsec): min=4145, max=83455, avg=6821.92, stdev=3142.68 00:11:05.731 clat (usec): min=722, max=11199, avg=3494.41, stdev=1191.92 00:11:05.731 lat (usec): min=730, max=11227, avg=3501.23, stdev=1193.61 00:11:05.731 clat percentiles (usec): 00:11:05.731 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2737], 00:11:05.731 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3163], 00:11:05.731 | 70.00th=[ 3458], 80.00th=[ 4080], 90.00th=[ 5342], 95.00th=[ 6259], 00:11:05.731 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 9896], 99.95th=[10290], 00:11:05.731 | 99.99th=[11076] 00:11:05.731 bw ( KiB/s): min=72800, max=80584, per=100.00%, avg=76421.33, stdev=3920.13, samples=3 00:11:05.731 iops : min=18200, max=20146, avg=19105.33, stdev=980.03, samples=3 00:11:05.731 lat (usec) : 750=0.01%, 1000=0.02% 00:11:05.731 lat (msec) : 2=0.16%, 4=79.12%, 10=20.63%, 20=0.08% 00:11:05.731 cpu : usr=99.05%, sys=0.00%, ctx=3, majf=0, minf=609 00:11:05.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:05.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.731 issued rwts: total=36627,36617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.731 00:11:05.731 Run status group 0 (all jobs): 00:11:05.731 READ: bw=71.5MiB/s (75.0MB/s), 71.5MiB/s-71.5MiB/s (75.0MB/s-75.0MB/s), io=143MiB (150MB), run=2001-2001msec 00:11:05.731 WRITE: bw=71.5MiB/s (75.0MB/s), 71.5MiB/s-71.5MiB/s (75.0MB/s-75.0MB/s), io=143MiB (150MB), run=2001-2001msec 00:11:05.731 ----------------------------------------------------- 00:11:05.731 Suppressions used: 00:11:05.731 count bytes template 00:11:05.731 1 32 /usr/src/fio/parse.c 00:11:05.731 1 8 libtcmalloc_minimal.so 00:11:05.731 
----------------------------------------------------- 00:11:05.731 00:11:05.731 14:06:07 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:05.732 14:06:07 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:05.732 14:06:07 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:05.732 14:06:07 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:05.732 14:06:07 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:05.732 14:06:07 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:05.732 14:06:08 -- nvme/nvme.sh@41 -- # bs=4096 00:11:05.732 14:06:08 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:05.732 14:06:08 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:05.732 14:06:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:05.732 14:06:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:05.732 14:06:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:05.732 14:06:08 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:05.732 14:06:08 -- common/autotest_common.sh@1330 -- # shift 00:11:05.732 14:06:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:05.732 14:06:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:05.732 14:06:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:05.732 14:06:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:05.732 14:06:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:05.732 14:06:08 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:05.732 14:06:08 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:05.732 14:06:08 -- common/autotest_common.sh@1336 -- # break 00:11:05.732 14:06:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:05.732 14:06:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:05.732 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:05.732 fio-3.35 00:11:05.732 Starting 1 thread 00:11:11.026 00:11:11.026 test: (groupid=0, jobs=1): err= 0: pid=65063: Sun Dec 8 14:06:13 2024 00:11:11.026 read: IOPS=14.9k, BW=58.1MiB/s (60.9MB/s)(116MiB/2001msec) 00:11:11.026 slat (usec): min=4, max=116, avg= 7.64, stdev= 4.08 00:11:11.026 clat (usec): min=473, max=13941, avg=4266.79, stdev=1387.09 00:11:11.026 lat (usec): min=479, max=14057, avg=4274.43, stdev=1388.78 00:11:11.026 clat percentiles (usec): 00:11:11.026 | 1.00th=[ 2573], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3163], 00:11:11.026 | 30.00th=[ 3359], 40.00th=[ 3556], 50.00th=[ 3785], 60.00th=[ 4080], 00:11:11.026 | 70.00th=[ 4686], 80.00th=[ 5407], 90.00th=[ 6325], 95.00th=[ 7111], 00:11:11.026 | 99.00th=[ 8356], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[11863], 00:11:11.026 | 99.99th=[13829] 00:11:11.026 bw ( 
KiB/s): min=52080, max=73776, per=99.86%, avg=59424.00, stdev=12430.33, samples=3 00:11:11.026 iops : min=13020, max=18444, avg=14856.00, stdev=3107.58, samples=3 00:11:11.026 write: IOPS=14.9k, BW=58.1MiB/s (61.0MB/s)(116MiB/2001msec); 0 zone resets 00:11:11.026 slat (nsec): min=5256, max=93212, avg=8124.63, stdev=3952.38 00:11:11.026 clat (usec): min=492, max=13687, avg=4298.64, stdev=1382.25 00:11:11.026 lat (usec): min=499, max=13704, avg=4306.76, stdev=1383.90 00:11:11.026 clat percentiles (usec): 00:11:11.026 | 1.00th=[ 2606], 5.00th=[ 2835], 10.00th=[ 2999], 20.00th=[ 3195], 00:11:11.026 | 30.00th=[ 3392], 40.00th=[ 3589], 50.00th=[ 3818], 60.00th=[ 4113], 00:11:11.026 | 70.00th=[ 4686], 80.00th=[ 5407], 90.00th=[ 6390], 95.00th=[ 7111], 00:11:11.026 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[11994], 00:11:11.026 | 99.99th=[13435] 00:11:11.026 bw ( KiB/s): min=51904, max=73136, per=99.51%, avg=59242.67, stdev=12038.36, samples=3 00:11:11.026 iops : min=12976, max=18284, avg=14810.67, stdev=3009.59, samples=3 00:11:11.026 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.01% 00:11:11.026 lat (msec) : 2=0.08%, 4=56.93%, 10=42.74%, 20=0.20% 00:11:11.026 cpu : usr=98.75%, sys=0.00%, ctx=3, majf=0, minf=608 00:11:11.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:11.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.026 issued rwts: total=29770,29783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.026 00:11:11.026 Run status group 0 (all jobs): 00:11:11.026 READ: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=116MiB (122MB), run=2001-2001msec 00:11:11.026 WRITE: bw=58.1MiB/s (61.0MB/s), 58.1MiB/s-58.1MiB/s (61.0MB/s-61.0MB/s), io=116MiB (122MB), run=2001-2001msec 00:11:11.026 ----------------------------------------------------- 00:11:11.026 Suppressions used: 00:11:11.026 count bytes template 00:11:11.026 1 32 /usr/src/fio/parse.c 00:11:11.026 1 8 libtcmalloc_minimal.so 00:11:11.026 ----------------------------------------------------- 00:11:11.026 00:11:11.026 14:06:13 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:11.026 14:06:13 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:11.026 14:06:13 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:11.026 14:06:13 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:11.026 14:06:13 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:11.026 14:06:13 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:11.287 14:06:13 -- nvme/nvme.sh@41 -- # bs=4096 00:11:11.287 14:06:13 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:11.287 14:06:13 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:11.287 14:06:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:11.287 14:06:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:11.287 14:06:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:11.287 
14:06:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.287 14:06:13 -- common/autotest_common.sh@1330 -- # shift 00:11:11.287 14:06:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:11.287 14:06:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:11.287 14:06:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.287 14:06:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:11.287 14:06:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:11.287 14:06:13 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:11.287 14:06:13 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:11.287 14:06:13 -- common/autotest_common.sh@1336 -- # break 00:11:11.287 14:06:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:11.287 14:06:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:11.287 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:11.287 fio-3.35 00:11:11.287 Starting 1 thread 00:11:17.879 00:11:17.879 test: (groupid=0, jobs=1): err= 0: pid=65134: Sun Dec 8 14:06:20 2024 00:11:17.879 read: IOPS=14.2k, BW=55.4MiB/s (58.0MB/s)(111MiB/2001msec) 00:11:17.879 slat (nsec): min=6051, max=70011, avg=7919.22, stdev=3660.39 00:11:17.879 clat (usec): min=860, max=10077, avg=4488.81, stdev=1286.88 00:11:17.879 lat (usec): min=867, max=10112, avg=4496.72, stdev=1288.09 00:11:17.879 clat percentiles (usec): 00:11:17.879 | 1.00th=[ 2933], 5.00th=[ 3195], 10.00th=[ 3294], 20.00th=[ 3458], 00:11:17.879 | 30.00th=[ 3589], 40.00th=[ 3752], 50.00th=[ 3916], 60.00th=[ 4293], 00:11:17.879 | 70.00th=[ 5014], 80.00th=[ 5669], 90.00th=[ 6456], 95.00th=[ 7177], 00:11:17.879 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[ 8717], 99.95th=[ 8979], 00:11:17.879 | 99.99th=[ 9896] 00:11:17.879 bw ( KiB/s): min=50408, max=60536, per=97.41%, avg=55216.00, stdev=5083.38, samples=3 00:11:17.879 iops : min=12602, max=15134, avg=13804.00, stdev=1270.84, samples=3 00:11:17.879 write: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(111MiB/2001msec); 0 zone resets 00:11:17.879 slat (usec): min=6, max=141, avg= 8.56, stdev= 3.87 00:11:17.879 clat (usec): min=905, max=9934, avg=4513.89, stdev=1283.68 00:11:17.879 lat (usec): min=912, max=9943, avg=4522.45, stdev=1284.87 00:11:17.879 clat percentiles (usec): 00:11:17.879 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3490], 00:11:17.879 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3949], 60.00th=[ 4359], 00:11:17.879 | 70.00th=[ 5014], 80.00th=[ 5735], 90.00th=[ 6521], 95.00th=[ 7177], 00:11:17.879 | 99.00th=[ 7963], 99.50th=[ 8225], 99.90th=[ 8717], 99.95th=[ 9110], 00:11:17.879 | 99.99th=[ 9765] 00:11:17.879 bw ( KiB/s): min=50792, max=60232, per=97.48%, avg=55266.67, stdev=4739.09, samples=3 00:11:17.879 iops : min=12698, max=15058, avg=13816.67, stdev=1184.77, samples=3 00:11:17.879 lat (usec) : 1000=0.02% 00:11:17.879 lat (msec) : 2=0.08%, 4=52.51%, 10=47.40%, 20=0.01% 00:11:17.879 cpu : usr=98.55%, sys=0.20%, ctx=5, majf=0, minf=606 00:11:17.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:17.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:17.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.879 issued rwts: total=28355,28362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.879 00:11:17.879 Run status group 0 (all jobs): 00:11:17.879 READ: bw=55.4MiB/s (58.0MB/s), 55.4MiB/s-55.4MiB/s (58.0MB/s-58.0MB/s), io=111MiB (116MB), run=2001-2001msec 00:11:17.879 WRITE: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=111MiB (116MB), run=2001-2001msec 00:11:17.879 ----------------------------------------------------- 00:11:17.879 Suppressions used: 00:11:17.879 count bytes template 00:11:17.879 1 32 /usr/src/fio/parse.c 00:11:17.879 1 8 libtcmalloc_minimal.so 00:11:17.879 ----------------------------------------------------- 00:11:17.879 00:11:17.879 14:06:20 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:17.879 14:06:20 -- nvme/nvme.sh@46 -- # true 00:11:17.879 00:11:17.879 real 0m24.595s 00:11:17.879 user 0m15.753s 00:11:17.879 sys 0m15.154s 00:11:17.879 14:06:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:17.879 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:17.879 ************************************ 00:11:17.879 END TEST nvme_fio 00:11:17.879 ************************************ 00:11:18.139 00:11:18.139 real 1m40.042s 00:11:18.139 user 3m41.303s 00:11:18.139 sys 0m26.610s 00:11:18.139 14:06:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:18.139 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 ************************************ 00:11:18.139 END TEST nvme 00:11:18.139 ************************************ 00:11:18.139 14:06:20 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:18.139 14:06:20 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:18.139 14:06:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.139 14:06:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.139 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:18.139 ************************************ 00:11:18.139 START TEST nvme_scc 00:11:18.139 ************************************ 00:11:18.139 14:06:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:18.139 * Looking for test storage... 
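One pattern worth calling out before the SCC output continues: all four nvme_fio passes above were launched the same way. The test ldd's the SPDK ioengine, and if the plugin links a sanitizer runtime (libasan here), that runtime is placed in LD_PRELOAD ahead of the plugin itself so ASAN initializes before any instrumented code runs. A condensed sketch of that launch, with paths taken from this log (the real fio_plugin helper in autotest_common.sh loops over several sanitizer names; this is simplified to the one that matched):

```bash
#!/usr/bin/env bash
# Condensed form of the fio_plugin launch traced in each nvme_fio pass.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme   # SPDK fio ioengine
fio_bin=/usr/src/fio/fio

# If the ioengine was built with ASAN, its runtime must come first in
# LD_PRELOAD; otherwise ASAN aborts because fio itself is uninstrumented
# and the runtime is not first in the initial library list.
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' \
           | awk '{print $3}' | head -n1)

LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" "$fio_bin" \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
```

The dots in `traddr=0000.00.06.0` are deliberate: fio treats `:` in --filename as a list separator, so the plugin expects the colons in the PCI address to be written as periods.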
00:11:18.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:18.139 14:06:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:18.139 14:06:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:18.139 14:06:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:18.139 14:06:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:18.139 14:06:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:18.139 14:06:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:18.139 14:06:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:18.139 14:06:21 -- scripts/common.sh@335 -- # IFS=.-: 00:11:18.139 14:06:21 -- scripts/common.sh@335 -- # read -ra ver1 00:11:18.139 14:06:21 -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.139 14:06:21 -- scripts/common.sh@336 -- # read -ra ver2 00:11:18.139 14:06:21 -- scripts/common.sh@337 -- # local 'op=<' 00:11:18.139 14:06:21 -- scripts/common.sh@339 -- # ver1_l=2 00:11:18.139 14:06:21 -- scripts/common.sh@340 -- # ver2_l=1 00:11:18.139 14:06:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:18.139 14:06:21 -- scripts/common.sh@343 -- # case "$op" in 00:11:18.139 14:06:21 -- scripts/common.sh@344 -- # : 1 00:11:18.139 14:06:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:18.139 14:06:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.139 14:06:21 -- scripts/common.sh@364 -- # decimal 1 00:11:18.139 14:06:21 -- scripts/common.sh@352 -- # local d=1 00:11:18.139 14:06:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.139 14:06:21 -- scripts/common.sh@354 -- # echo 1 00:11:18.139 14:06:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:18.139 14:06:21 -- scripts/common.sh@365 -- # decimal 2 00:11:18.139 14:06:21 -- scripts/common.sh@352 -- # local d=2 00:11:18.140 14:06:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.140 14:06:21 -- scripts/common.sh@354 -- # echo 2 00:11:18.140 14:06:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:18.140 14:06:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:18.140 14:06:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:18.140 14:06:21 -- scripts/common.sh@367 -- # return 0 00:11:18.140 14:06:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.140 14:06:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:18.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.140 --rc genhtml_branch_coverage=1 00:11:18.140 --rc genhtml_function_coverage=1 00:11:18.140 --rc genhtml_legend=1 00:11:18.140 --rc geninfo_all_blocks=1 00:11:18.140 --rc geninfo_unexecuted_blocks=1 00:11:18.140 00:11:18.140 ' 00:11:18.140 14:06:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:18.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.140 --rc genhtml_branch_coverage=1 00:11:18.140 --rc genhtml_function_coverage=1 00:11:18.140 --rc genhtml_legend=1 00:11:18.140 --rc geninfo_all_blocks=1 00:11:18.140 --rc geninfo_unexecuted_blocks=1 00:11:18.140 00:11:18.140 ' 00:11:18.140 14:06:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:18.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.140 --rc genhtml_branch_coverage=1 00:11:18.140 --rc genhtml_function_coverage=1 00:11:18.140 --rc genhtml_legend=1 00:11:18.140 --rc geninfo_all_blocks=1 00:11:18.140 --rc geninfo_unexecuted_blocks=1 00:11:18.140 00:11:18.140 ' 00:11:18.140 14:06:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:18.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.140 --rc genhtml_branch_coverage=1 00:11:18.140 --rc genhtml_function_coverage=1 00:11:18.140 --rc genhtml_legend=1 00:11:18.140 --rc geninfo_all_blocks=1 00:11:18.140 --rc geninfo_unexecuted_blocks=1 00:11:18.140 00:11:18.140 ' 00:11:18.140 14:06:21 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:18.140 14:06:21 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:18.399 14:06:21 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:18.399 14:06:21 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:18.399 14:06:21 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:18.399 14:06:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.399 14:06:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.399 14:06:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.399 14:06:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.399 14:06:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.399 14:06:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.399 14:06:21 -- paths/export.sh@5 -- # export PATH 00:11:18.399 14:06:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.399 14:06:21 -- nvme/functions.sh@10 -- # ctrls=() 00:11:18.399 14:06:21 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:18.399 14:06:21 -- nvme/functions.sh@11 -- # nvmes=() 00:11:18.399 14:06:21 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:18.399 14:06:21 -- nvme/functions.sh@12 -- # bdfs=() 00:11:18.399 14:06:21 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:18.399 14:06:21 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:18.399 14:06:21 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:18.399 
14:06:21 -- nvme/functions.sh@14 -- # nvme_name= 00:11:18.399 14:06:21 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:18.399 14:06:21 -- nvme/nvme_scc.sh@12 -- # uname 00:11:18.399 14:06:21 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:18.399 14:06:21 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:18.399 14:06:21 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:18.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.918 Waiting for block devices as requested 00:11:18.918 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.918 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.918 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.179 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:24.562 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:24.562 14:06:27 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:24.562 14:06:27 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:24.562 14:06:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:24.562 14:06:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:24.562 14:06:27 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:24.562 14:06:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:24.562 14:06:27 -- scripts/common.sh@15 -- # local i 00:11:24.562 14:06:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:24.562 14:06:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:24.562 14:06:27 -- scripts/common.sh@24 -- # return 0 00:11:24.562 14:06:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:24.562 14:06:27 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:24.562 14:06:27 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@18 -- # shift 00:11:24.562 14:06:27 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:24.562 14:06:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
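The long run of `IFS=:` / `read -r reg val` pairs in this dump is `nvme_get` from test/common/nvme/functions.sh walking the plain-text output of `nvme id-ctrl /dev/nvme0`: each line is split on the first colon and stored in the global associative array declared above (`nvme0[vid]=0x1b36`, `nvme0[sn]='12343 '`, and so on). A small stand-alone sketch of the same parsing loop, assuming nvme-cli's default `field : value` text format (array and variable names here are illustrative):

```bash
#!/usr/bin/env bash
# Stand-alone sketch of the id-ctrl parsing loop producing the nvme0[...]
# assignments traced in this dump.
declare -A ctrl=()

# nvme-cli prints one "field : value" pair per line; val keeps any further
# colons because it is the last variable passed to read.
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}               # field names are single tokens
    val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace only
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

printf 'vid=%s mdts=%s oacs=%s\n' "${ctrl[vid]}" "${ctrl[mdts]}" "${ctrl[oacs]}"
# Against the dump here this prints: vid=0x1b36 mdts=7 oacs=0x12a
```

The real helper additionally `eval`s each pair into a per-controller array name (`nvme0`, `nvme1`, ...), which is why every stored value appears twice in the trace: once inside the `eval` string and once as the resulting assignment.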
00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.562 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.562 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:24.562 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 
14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:24.563 14:06:27 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.563 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:24.563 14:06:27 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.563 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:24.564 14:06:27 
-- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:24.564 
14:06:27 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.564 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:24.564 14:06:27 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:24.564 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 
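The loop traced above is the whole mechanism: scan_nvme_ctrls walks /sys/class/nvme/nvme*, resolves each controller's PCI address, gates it through pci_can_use, and then nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, splits each "key : value" output line on the colon with IFS=:, and evals every non-empty value into a global associative array named after the controller (nvme0[vid]=0x1b36, nvme0[mdts]=7, and so on). A minimal sketch of that parsing pattern, assuming nvme-cli's "key : value" id-ctrl format; the names mirror the trace, but this is an illustration, not the full nvme/functions.sh:

#!/usr/bin/env bash
# Sketch: parse `nvme id-ctrl /dev/<ctrl>` into an associative array <ctrl>[...],
# the way the xtrace above shows functions.sh@17-23 doing it.
nvme_get_sketch() {
    local ref=$1 reg val
    local -gA "$ref=()"                  # e.g. declare -gA nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # "ps    0 " -> "ps0", "vid   " -> "vid"
        [[ -n $val ]] || continue        # skip banner and blank lines
        eval "${ref}[\$reg]=\${val# }"   # nvme0[vid]=0x1b36, nvme0[sn]='12343 ', ...
    done < <(nvme id-ctrl "/dev/$ref")
}
nvme_get_sketch nvme0
echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"

Once the array is populated, the bookkeeping in the trace that follows records ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, and bdfs[nvme0]=0000:00:09.0, so later test code can look each controller up by name or PCI address; the same nvme_get loop then repeats verbatim for nvme1 at 0000:00:08.0.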
00:11:24.565 14:06:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:24.565 14:06:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:24.565 14:06:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:24.565 14:06:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:24.565 14:06:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:24.565 14:06:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:24.565 14:06:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:24.565 14:06:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:24.565 14:06:27 -- scripts/common.sh@15 -- # local i 00:11:24.565 14:06:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:24.565 14:06:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:24.565 14:06:27 -- scripts/common.sh@24 -- # return 0 00:11:24.565 14:06:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:24.565 14:06:27 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:24.565 14:06:27 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@18 -- # shift 00:11:24.565 14:06:27 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:24.565 14:06:27 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.565 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:24.565 14:06:27 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:24.565 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:24.566 14:06:27 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.566 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:24.566 14:06:27 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:24.566 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 
00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:24.567 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.567 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.567 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:24.568 14:06:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:24.568 14:06:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:24.568 14:06:27 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:24.568 14:06:27 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:24.568 14:06:27 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:24.568 14:06:27 -- nvme/functions.sh@18 -- # shift 00:11:24.568 14:06:27 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:24.568 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.568 14:06:27 -- nvme/functions.sh@21 
00:11:24.568 14:06:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:11:24.568 14:06:27 -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:11:24.569 14:06:27 -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:11:24.569 14:06:27 -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:24.569 14:06:27 -- nvme/functions.sh@23 -- # nvme1n1 lbaf0-3: 'ms:0 lbads:9 rp:0' 'ms:8 lbads:9 rp:0' 'ms:16 lbads:9 rp:0' 'ms:64 lbads:9 rp:0'
00:11:24.569 14:06:27 -- nvme/functions.sh@23 -- # nvme1n1 lbaf4-7: 'ms:0 lbads:12 rp:0 (in use)' 'ms:8 lbads:12 rp:0' 'ms:16 lbads:12 rp:0' 'ms:64 lbads:12 rp:0'
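Each lbafN entry encodes the metadata bytes per block (ms), the LBA data size as a power of two (lbads), and relative performance (rp); flbas=0x4 points at lbaf4, so this namespace runs 4096-byte blocks with no metadata. Decoding that from the array the trace just populated (the pure-bash extraction here is illustrative, not something functions.sh does):

    [[ ${nvme1n1[lbaf4]} =~ lbads:([0-9]+) ]]
    bs=$((1 << BASH_REMATCH[1]))                      # 1 << 12 = 4096 bytes per block
    echo "$(( nvme1n1[nsze] * bs / 1024**3 )) GiB"    # 0x100000 blocks -> 4 GiB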
00:11:24.570 14:06:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:24.570 14:06:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:24.570 14:06:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:11:24.570 14:06:27 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2
00:11:24.570 14:06:27 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:11:24.570 14:06:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
00:11:24.570 14:06:27 -- nvme/functions.sh@23 -- # nvme1n2 id-ns: identical to nvme1n1 (nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0, all atomicity/placement fields 0, mssrl=128 mcl=128 msrc=127, zero nguid/eui64, lbaf0-7 as above with lbaf4 in use)
00:11:24.571 14:06:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
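The @54/@55/@57/@58 markers repeating for n1, n2, n3 are a single loop in functions.sh: glob the controller's namespace nodes out of sysfs, parse each one with nvme_get, and file it in the nameref'd per-controller array under the digits after the final "n". The reconstructed shape of that loop (faithful to the traced lines; the existence guard is an assumption for a non-matching glob):

    for ns in "$ctrl/${ctrl##*/}n"*; do      # /sys/class/nvme/nvme1/nvme1n1, n2, n3
            [[ -e $ns ]] || continue         # assumed guard if the glob matched nothing
            ns_dev=${ns##*/}                 # nvme1n2
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev      # key "2" -> nvme1n2
    done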
00:11:24.571 14:06:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:24.571 14:06:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
00:11:24.571 14:06:27 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3
00:11:24.571 14:06:27 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
00:11:24.571 14:06:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3
00:11:24.572 14:06:27 -- nvme/functions.sh@23 -- # nvme1n3 id-ns: identical to nvme1n1/nvme1n2 (nsze=0x100000, nlbaf=7, flbas=0x4, mssrl=128 mcl=128 msrc=127, zero nguid/eui64, lbaf4 'ms:0 lbads:12 rp:0' in use)
00:11:24.573 14:06:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3
00:11:24.573 14:06:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:24.573 14:06:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:24.573 14:06:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0
00:11:24.573 14:06:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:11:24.573 14:06:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:24.573 14:06:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:24.573 14:06:27 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:11:24.573 14:06:27 -- scripts/common.sh@15-24 -- # pci_can_use 0000:00:06.0: allow/block lists empty, return 0
00:11:24.573 14:06:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:24.573 14:06:27 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:24.573 14:06:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
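With nvme1 registered (its namespace map, its PCI address 0000:00:08.0, and its slot in ordered_ctrls), the outer loop moves on to nvme2, and pci_can_use from scripts/common.sh gates the device first. The trace only shows an empty-list fast path, so this is a hedged sketch of that gate; the PCI_ALLOWED and PCI_BLOCKED variable names and the exact precedence are assumptions, not lifted from the script:

    pci_can_use() {
            local i
            # a device on the block list is never used (variable name assumed)
            [[ -n $PCI_BLOCKED && $PCI_BLOCKED =~ $1 ]] && return 1
            # no allow list configured: every remaining device is usable
            [[ -z $PCI_ALLOWED ]] && return 0
            # otherwise the address must appear on the allow list
            for i in $PCI_ALLOWED; do
                    [[ $i == "$1" ]] && return 0
            done
            return 1
    }

In this run both lists are unset, which is why the trace falls straight through to return 0 and nvme2 gets parsed.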
00:11:24.573 14:06:27 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:11:24.573 14:06:27 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:24.573 14:06:27 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:11:24.574 14:06:27 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
00:11:24.574 14:06:27 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: sqes=0x66 cqes=0x44 maxcmd=0 nn=256
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.575 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:24.575 14:06:27 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:24.575 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.575 14:06:27 -- nvme/functions.sh@21 
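The dump above is nvme_get in nvme/functions.sh walking every "reg : val" line that nvme-cli prints for id-ctrl and caching it in a bash associative array named after the controller (here nvme2); the eval/IFS/read scaffolding is what fills each key. A minimal sketch of that pattern, assuming nvme-cli's usual "field : value" output; the helper name and the exact whitespace trimming are illustrative, not the verbatim SPDK source:

  #!/usr/bin/env bash
  # Cache "nvme id-ctrl" fields into an associative array, one entry per field.
  nvme_get_sketch() {             # usage: nvme_get_sketch <array> <subcmd> <dev>
    local ref=$1 reg val
    shift
    local -gA "$ref=()"           # global associative array, e.g. nvme2=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}    # "crdt3     " -> "crdt3"
      val=${val# }                # drop the space nvme-cli prints after ':'
      [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
    done < <(nvme "$@")
  }

  nvme_get_sketch nvme2 id-ctrl /dev/nvme2
  echo "sqes=${nvme2[sqes]} cqes=${nvme2[cqes]} oncs=${nvme2[oncs]}"

The eval is what the trace records as, e.g., eval 'nvme2[oacs]="0x12a"'; it lets one helper populate arrays whose names are chosen by the caller.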
00:11:24.575 14:06:27 -- nvme/functions.sh -- power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:24.575 14:06:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:24.575 14:06:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:24.575 14:06:27 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1)
00:11:24.575 14:06:27 -- nvme/functions.sh -- nvme2n1 id-ns fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:24.576 14:06:27 -- nvme/functions.sh -- nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:11:24.576 14:06:27 -- nvme/functions.sh -- nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:24.576 14:06:27 -- nvme/functions.sh -- lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:11:24.577 14:06:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:11:24.577 14:06:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
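Decoding the namespace block: the low nibble of flbas=0x7 selects LBA format 7, whose descriptor above reads 'ms:64 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks with 64 bytes of metadata each, and nsze=0x17a17a is the namespace size in blocks. The raw data capacity therefore works out as below (plain shell arithmetic; the byte count excludes the per-block metadata):

  echo $(( 0x17a17a ))          # 1548666 blocks
  echo $(( 0x17a17a * 4096 ))   # 6343335936 bytes, about 5.9 GiB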
00:11:24.577 14:06:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:24.577 14:06:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:11:24.577 14:06:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:11:24.577 14:06:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:11:24.577 14:06:27 -- nvme/functions.sh@49 -- # pci=0000:00:07.0
00:11:24.577 14:06:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0
00:11:24.577 14:06:27 -- scripts/common.sh@15 -- # local i
00:11:24.577 14:06:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]]
00:11:24.577 14:06:27 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:24.577 14:06:27 -- scripts/common.sh@24 -- # return 0
00:11:24.577 14:06:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:11:24.577 14:06:27 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
00:11:24.577 14:06:27 -- nvme/functions.sh -- nvme3 id-ctrl fields: vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:11:24.578 14:06:27 -- nvme/functions.sh -- fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
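Before adopting nvme3, the harness asks scripts/common.sh whether PCI device 0000:00:07.0 may be used; in the trace both filter expansions are empty, so the check falls through to return 0 (usable). A sketch of that allow/block-list shape, with hypothetical PCI_BLOCKED/PCI_ALLOWED variables standing in for whatever lists the real script consults:

  # Return 0 if the given BDF is not blocked and is admitted by the allow list.
  pci_can_use_sketch() {
    local bdf=$1 i
    for i in $PCI_BLOCKED; do               # hypothetical block list
      [[ $i == "$bdf" ]] && return 1
    done
    [[ -z $PCI_ALLOWED ]] && return 0       # empty allow list admits everything
    for i in $PCI_ALLOWED; do               # hypothetical allow list
      [[ $i == "$bdf" ]] && return 0
    done
    return 1
  }

  pci_can_use_sketch 0000:00:07.0 && echo "0000:00:07.0 is usable"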
00:11:24.578 14:06:27 -- nvme/functions.sh -- frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
00:11:24.579 14:06:27 -- nvme/functions.sh -- hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
00:11:24.579 14:06:27 -- nvme/functions.sh -- icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:24.580 14:06:27 -- nvme/functions.sh -- power states: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:24.580 14:06:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:11:24.580 14:06:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]]
00:11:24.580 14:06:27 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1)
14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:24.580 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.580 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.580 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:24.581 14:06:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # IFS=: 00:11:24.581 14:06:27 -- nvme/functions.sh@21 -- # read -r reg val 00:11:24.581 14:06:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:24.581 14:06:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:24.581 14:06:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:24.581 14:06:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:24.581 14:06:27 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:24.581 14:06:27 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:24.581 14:06:27 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:24.581 14:06:27 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:24.581 14:06:27 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:24.581 14:06:27 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:24.581 14:06:27 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:24.581 14:06:27 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:24.581 14:06:27 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:24.581 14:06:27 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:24.581 14:06:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:24.581 14:06:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # echo nvme1 00:11:24.581 14:06:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:24.581 14:06:27 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:24.581 
14:06:27 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:24.581 14:06:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:24.581 14:06:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # echo nvme0 00:11:24.581 14:06:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # echo nvme3 00:11:24.581 14:06:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:24.581 14:06:27 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:24.581 14:06:27 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:24.581 14:06:27 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:24.581 14:06:27 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:24.581 14:06:27 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:24.581 14:06:27 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:24.581 14:06:27 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:24.581 14:06:27 -- nvme/functions.sh@197 -- # echo nvme2 00:11:24.581 14:06:27 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:24.581 14:06:27 -- nvme/functions.sh@206 -- # echo nvme1 00:11:24.581 14:06:27 -- nvme/functions.sh@207 -- # return 0 00:11:24.581 14:06:27 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:24.581 14:06:27 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:24.581 14:06:27 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.525 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.525 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.525 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.786 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:25.786 14:06:28 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:25.786 14:06:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:25.786 14:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.786 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:25.786 ************************************ 00:11:25.786 START TEST nvme_simple_copy 00:11:25.786 ************************************ 00:11:25.786 14:06:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:26.047 Initializing NVMe Controllers 00:11:26.047 Attaching to 0000:00:08.0 00:11:26.047 Controller supports SCC. Attached to 0000:00:08.0 00:11:26.047 Namespace ID: 1 size: 4GB 00:11:26.047 Initialization complete. 00:11:26.047 00:11:26.047 Controller QEMU NVMe Ctrl (12342 ) 00:11:26.047 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:26.047 Namespace Block Size:4096 00:11:26.047 Writing LBAs 0 to 63 with Random Data 00:11:26.047 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:26.047 LBAs matching Written Data: 64 00:11:26.047 00:11:26.047 real 0m0.280s 00:11:26.047 user 0m0.100s 00:11:26.047 sys 0m0.077s 00:11:26.047 14:06:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:26.047 ************************************ 00:11:26.047 END TEST nvme_simple_copy 00:11:26.047 ************************************ 00:11:26.047 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 00:11:26.047 real 0m7.954s 00:11:26.047 user 0m1.149s 00:11:26.047 sys 0m1.600s 00:11:26.047 14:06:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:26.047 ************************************ 00:11:26.047 END TEST nvme_scc 00:11:26.047 ************************************ 00:11:26.047 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 14:06:28 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:26.047 14:06:28 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:26.047 14:06:28 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:26.047 14:06:28 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:26.047 14:06:28 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:26.047 14:06:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:26.047 14:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.047 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 ************************************ 00:11:26.047 START TEST nvme_fdp 00:11:26.047 ************************************ 00:11:26.047 14:06:28 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:26.309 * Looking for test storage... 
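The START TEST / END TEST banners and the real/user/sys lines around nvme_simple_copy come from a run_test-style wrapper that names the test, times it, and prints banners. A hedged sketch of the visible behavior (SPDK's actual run_test lives in autotest_common.sh; this is not that code):

run_test() {
    local name=$1 rc; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # run the test, printing real/user/sys as above
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
# Usage mirroring the log:
# run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:08.0'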
00:11:26.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:26.309 14:06:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:26.309 14:06:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:26.309 14:06:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:26.309 14:06:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:26.309 14:06:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:26.309 14:06:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:26.309 14:06:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:26.309 14:06:29 -- scripts/common.sh@335 -- # IFS=.-: 00:11:26.309 14:06:29 -- scripts/common.sh@335 -- # read -ra ver1 00:11:26.309 14:06:29 -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.309 14:06:29 -- scripts/common.sh@336 -- # read -ra ver2 00:11:26.309 14:06:29 -- scripts/common.sh@337 -- # local 'op=<' 00:11:26.309 14:06:29 -- scripts/common.sh@339 -- # ver1_l=2 00:11:26.309 14:06:29 -- scripts/common.sh@340 -- # ver2_l=1 00:11:26.309 14:06:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:26.309 14:06:29 -- scripts/common.sh@343 -- # case "$op" in 00:11:26.309 14:06:29 -- scripts/common.sh@344 -- # : 1 00:11:26.309 14:06:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:26.309 14:06:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.309 14:06:29 -- scripts/common.sh@364 -- # decimal 1 00:11:26.309 14:06:29 -- scripts/common.sh@352 -- # local d=1 00:11:26.309 14:06:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.309 14:06:29 -- scripts/common.sh@354 -- # echo 1 00:11:26.309 14:06:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:26.309 14:06:29 -- scripts/common.sh@365 -- # decimal 2 00:11:26.309 14:06:29 -- scripts/common.sh@352 -- # local d=2 00:11:26.309 14:06:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.309 14:06:29 -- scripts/common.sh@354 -- # echo 2 00:11:26.309 14:06:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:26.309 14:06:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:26.309 14:06:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:26.309 14:06:29 -- scripts/common.sh@367 -- # return 0 00:11:26.309 14:06:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.309 14:06:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.309 --rc genhtml_branch_coverage=1 00:11:26.309 --rc genhtml_function_coverage=1 00:11:26.309 --rc genhtml_legend=1 00:11:26.309 --rc geninfo_all_blocks=1 00:11:26.309 --rc geninfo_unexecuted_blocks=1 00:11:26.309 00:11:26.309 ' 00:11:26.309 14:06:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.309 --rc genhtml_branch_coverage=1 00:11:26.309 --rc genhtml_function_coverage=1 00:11:26.309 --rc genhtml_legend=1 00:11:26.309 --rc geninfo_all_blocks=1 00:11:26.309 --rc geninfo_unexecuted_blocks=1 00:11:26.309 00:11:26.309 ' 00:11:26.309 14:06:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.309 --rc genhtml_branch_coverage=1 00:11:26.309 --rc genhtml_function_coverage=1 00:11:26.309 --rc genhtml_legend=1 00:11:26.309 --rc geninfo_all_blocks=1 00:11:26.310 --rc geninfo_unexecuted_blocks=1 00:11:26.310 00:11:26.310 ' 00:11:26.310 14:06:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:26.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.310 --rc genhtml_branch_coverage=1 00:11:26.310 --rc genhtml_function_coverage=1 00:11:26.310 --rc genhtml_legend=1 00:11:26.310 --rc geninfo_all_blocks=1 00:11:26.310 --rc geninfo_unexecuted_blocks=1 00:11:26.310 00:11:26.310 ' 00:11:26.310 14:06:29 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:26.310 14:06:29 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:26.310 14:06:29 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:26.310 14:06:29 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:26.310 14:06:29 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.310 14:06:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.310 14:06:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.310 14:06:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.310 14:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.310 14:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.310 14:06:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.310 14:06:29 -- paths/export.sh@5 -- # export PATH 00:11:26.310 14:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.310 14:06:29 -- nvme/functions.sh@10 -- # ctrls=() 00:11:26.310 14:06:29 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:26.310 14:06:29 -- nvme/functions.sh@11 -- # nvmes=() 00:11:26.310 14:06:29 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:26.310 14:06:29 -- nvme/functions.sh@12 -- # bdfs=() 00:11:26.310 14:06:29 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:26.310 14:06:29 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:26.310 14:06:29 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:26.310 
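The lt 1.15 2 / cmp_versions trace just above is a dotted-version comparison done purely in Bash: split both versions on "." and "-", then compare component by component. A standalone sketch of the technique (a simplified stand-in, not the exact scripts/common.sh code; `cmp_lt` is a hypothetical name):

cmp_lt() {  # returns 0 if $1 < $2, comparing dotted numeric versions
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # missing parts count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
cmp_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov version check in the log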
14:06:29 -- nvme/functions.sh@14 -- # nvme_name= 00:11:26.310 14:06:29 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.310 14:06:29 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:26.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:26.881 Waiting for block devices as requested 00:11:26.881 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:26.881 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.142 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:27.142 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:32.434 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:32.434 14:06:34 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:32.434 14:06:34 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:32.434 14:06:34 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:32.434 14:06:34 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:32.434 14:06:34 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:32.434 14:06:34 -- scripts/common.sh@15 -- # local i 00:11:32.434 14:06:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:32.434 14:06:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:32.434 14:06:34 -- scripts/common.sh@24 -- # return 0 00:11:32.434 14:06:34 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:32.434 14:06:34 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:32.434 14:06:34 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@18 -- # shift 00:11:32.434 14:06:34 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:32.434 14:06:34 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.434 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.434 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:32.434 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 
14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:32.435 
14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:34 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:32.435 14:06:35 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.435 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.435 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:32.435 14:06:35 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 
14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 
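nvme0[oncs]=0x15d above is the Optional NVM Command Support bitmask, and earlier in the log ctrl_has_scc tested bit 8 of it with (( oncs & 1 << 8 )) to find controllers supporting the Simple Copy Command. A sketch of that bit test, with the hypothetical helper name `has_scc`:

has_scc() {  # $1 = ONCS value from id-ctrl, e.g. 0x15d
    local oncs=$(( $1 ))    # let Bash arithmetic parse the 0x prefix
    (( oncs & 1 << 8 ))     # bit 8 = Copy command support, per the NVMe spec
}
has_scc 0x15d && echo "controller supports Simple Copy"   # 0x15d (0b101011101) has bit 8 set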
00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:32.436 14:06:35 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.436 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.436 14:06:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:32.436 14:06:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
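oncs=0x15d above is a bitmask of the optional NVM commands this controller supports; per the NVMe base spec, bit 0 is Compare, bit 2 Dataset Management, bit 3 Write Zeroes, and bit 6 Timestamp (exact bit assignments vary slightly by spec revision, so treat this as a hedged reading rather than the test's own decode). Testing a bit against the value captured in the parsed array is one arithmetic expression:

    # Does nvme0 support Dataset Management (ONCS bit 2)?
    oncs=0x15d                                 # nvme0[oncs] from the trace above
    if (( oncs & (1 << 2) )); then
      echo "nvme0: Dataset Management supported"
    fi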
00:11:32.436 14:06:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:32.436 14:06:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:32.436 14:06:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:32.436 14:06:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0
00:11:32.436 14:06:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:11:32.436 14:06:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:32.436 14:06:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:32.436 14:06:35 -- nvme/functions.sh@49 -- # pci=0000:00:08.0
00:11:32.436 14:06:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0
00:11:32.436 14:06:35 -- scripts/common.sh@15 -- # local i
00:11:32.436 14:06:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]]
00:11:32.436 14:06:35 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:32.436 14:06:35 -- scripts/common.sh@24 -- # return 0
00:11:32.436 14:06:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:32.437 14:06:35 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:32.437 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:32.437 14:06:35 -- nvme/functions.sh@18 -- # shift
00:11:32.437 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:11:32.437 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
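The xtrace above shows the core of nvme_get: run nvme-cli's id-ctrl against the device, then fold each "field : value" line of its output into a global associative array, one eval per register. A minimal standalone sketch of that pattern (the helper name and the whitespace trimming are illustrative, not the exact functions.sh internals; assumes an nvme-cli binary on PATH):

    #!/usr/bin/env bash
    # Parse "vid : 0x1b36"-style nvme-cli output into an associative array.
    get_ctrl() {
      local dev=$1 ref=$2 reg val
      declare -gA "$ref=()"                    # e.g. creates global nvme1=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # "ps 0     " -> "ps0"
        val=${val# }                           # drop the separator's space
        [[ -n $reg && -n $val ]] || continue   # skip banner/blank lines
        eval "${ref}[\$reg]=\$val"             # -> nvme1[vid]=0x1b36
      done < <(nvme id-ctrl "$dev")
    }

    get_ctrl /dev/nvme1 nvme1
    echo "vid=${nvme1[vid]} sn=${nvme1[sn]} subnqn=${nvme1[subnqn]}"

Splitting on the first colon only is what lets colon-bearing values such as subnqn=nqn.2019-08.org.qemu:12342 survive intact: read hands everything after the first separator to val.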
00:11:32.437 14:06:35 -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme1 id-ctrl:
00:11:32.437 14:06:35 -- #   vid=0x1b36 ssvid=0x1af4 sn='12342   ' mn='QEMU NVMe Ctrl   ' fr='8.0.0   ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:11:32.437 14:06:35 -- #   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:11:32.437 14:06:35 -- #   nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
00:11:32.437 14:06:35 -- #   hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:11:32.438 14:06:35 -- #   nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:11:32.438 14:06:35 -- #   oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:32.439 14:06:35 -- #   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:11:32.439 14:06:35 -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:32.439 14:06:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
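Two of the controller values captured above are packed nibbles: per the NVMe base spec, SQES and CQES carry log2 of the required (low nibble) and maximum (high nibble) queue entry sizes, so sqes=0x66 and cqes=0x44 mean 64-byte submission entries and 16-byte completion entries. A quick decode in the same shell idiom (the helper name is illustrative):

    # Each nibble of SQES/CQES is log2(entry size in bytes).
    decode_qes() {
      local v=$(( $1 )) lo hi
      lo=$(( v & 0xf )); hi=$(( (v >> 4) & 0xf ))
      printf 'required=%dB max=%dB\n' $((1 << lo)) $((1 << hi))
    }
    decode_qes 0x66    # SQES -> required=64B max=64B
    decode_qes 0x44    # CQES -> required=16B max=16B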
00:11:32.439 14:06:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:32.439 14:06:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:32.439 14:06:35 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:32.439 14:06:35 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:32.439 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:32.439 14:06:35 -- nvme/functions.sh@18 -- # shift
00:11:32.439 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:32.439 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:32.439 14:06:35 -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme1n1 id-ns:
00:11:32.439 14:06:35 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:32.439 14:06:35 -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:32.440 14:06:35 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:32.440 14:06:35 -- #   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:11:32.440 14:06:35 -- #   lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:32.440 14:06:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
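nvme1n1's in-use LBA format follows from two of the fields just captured: FLBAS bits 3:0 select the lbafN entry (flbas=0x4 picks lbaf4, which the dump marks "(in use)"), and that entry's lbads is log2 of the block size, so 2^12 = 4096-byte blocks with ms:0 metadata. A small sketch of recovering it from the captured strings (array contents taken from the trace; the string parsing is illustrative):

    flbas=0x4                                   # nvme1n1[flbas] from the trace
    fmt=$(( flbas & 0xf ))                      # -> 4
    lbaf='ms:0 lbads:12 rp:0 (in use)'          # nvme1n1[lbaf4] from the trace
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "lbaf$fmt: $((1 << lbads))-byte blocks"   # -> lbaf4: 4096-byte blocks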
00:11:32.440 14:06:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:32.440 14:06:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:11:32.440 14:06:35 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2
00:11:32.440 14:06:35 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
00:11:32.440 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val
00:11:32.440 14:06:35 -- nvme/functions.sh@18 -- # shift
00:11:32.440 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()'
00:11:32.440 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
00:11:32.440 14:06:35 -- nvme/functions.sh@21-23 -- # [xtrace condensed] nvme1n2 id-ns:
00:11:32.440 14:06:35 -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:11:32.441 14:06:35 -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:11:32.441 14:06:35 -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:32.441 14:06:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:32.441 14:06:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
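
With nvme1n2 fully captured, functions.sh@58 files the array name into _ctrl_ns keyed by the namespace number, and the @54 loop advances to /sys/class/nvme/nvme1/nvme1n3. A self-contained sketch of that loop, reusing nvme_get_sketch from the note above (the nameref declaration mirrors the traced @53):

    # Sketch of the per-namespace loop traced at functions.sh@54-58.
    ctrl=/sys/class/nvme/nvme1
    declare -gA nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns                   # nameref to this controller's ns map
    for ns in "$ctrl/${ctrl##*/}n"*; do            # nvme1n1 nvme1n2 nvme1n3 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev                # keyed by namespace number: 1, 2, 3
    done
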
00:11:32.441 14:06:35 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:32.441 14:06:35 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:32.441 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@18 -- # shift 00:11:32.441 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.441 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:32.441 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:32.441 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.442 14:06:35 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:32.442 14:06:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:32.442 14:06:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:32.442 14:06:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:32.442 14:06:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:32.442 14:06:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:32.442 14:06:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:32.442 14:06:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:32.442 14:06:35 -- scripts/common.sh@15 -- # local i 00:11:32.442 14:06:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:32.442 14:06:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:32.442 14:06:35 -- scripts/common.sh@24 -- # return 0 00:11:32.442 14:06:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:32.442 14:06:35 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:32.442 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@18 -- # shift 00:11:32.442 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- 
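
nvme1 is now fully recorded: functions.sh@60-63 stores the controller in ctrls, points nvmes[nvme1] at the nvme1_ns map, and notes its PCI address 0000:00:08.0 in bdfs. The scan then reaches nvme2 at 0000:00:06.0, and pci_can_use (scripts/common.sh) admits it because the filter list is empty, which is why the traced [[ =~ 0000:00:06.0 ]] has nothing on its left-hand side. The bookkeeping amounts to the following (the sysfs BDF lookup is an assumption; the trace only shows the resulting address):

    # Sketch of the controller bookkeeping traced at functions.sh@60-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl=/sys/class/nvme/nvme1
    ctrl_dev=${ctrl##*/}
    ctrls[$ctrl_dev]=$ctrl_dev                     # ctrls[nvme1]=nvme1
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                # nvmes[nvme1]=nvme1_ns
    bdfs[$ctrl_dev]=$(basename "$(readlink -f "$ctrl/device")")   # 0000:00:08.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # ordered_ctrls[1]=nvme1
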
nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.442 14:06:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:32.442 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:32.442 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 
14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:32.443 14:06:35 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:32.443 14:06:35 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.443 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:32.443 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:32.443 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 
00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:32.444 14:06:35 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:32.444 14:06:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:32.444 14:06:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:32.444 14:06:35 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:32.444 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@18 -- # shift 00:11:32.444 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.444 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:32.444 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:32.444 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 
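
The identify-controller pass for nvme2 above ends with the power-state fields (a single 25.00W operational state), and the namespace loop then starts on nvme2n1. Two of the id-ctrl values captured for nvme2 decode as follows; the 4 KiB memory page size is an assumption about this QEMU controller's CAP.MPSMIN:

    # Decoding two id-ctrl fields captured above for nvme2 (bash arithmetic).
    mdts=7        # nvme2[mdts]: max transfer = 2^MDTS memory pages
    page=4096     # assumed CAP.MPSMIN page size
    echo "max data transfer: $(( (1 << mdts) * page )) bytes"    # 524288 (512 KiB)

    sqes=0x66     # nvme2[sqes]: required/max SQ entry size, log2, packed in nibbles
    cqes=0x44     # nvme2[cqes]: same packing for CQ entries
    echo "SQ entry: $((1 << (sqes & 0xf)))B, CQ entry: $((1 << (cqes & 0xf)))B"  # 64B / 16B
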
14:06:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:32.445 14:06:35 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:32.445 
14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:32.445 14:06:35 
-- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.445 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:32.445 14:06:35 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.445 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:32.446 14:06:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:32.446 14:06:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:32.446 14:06:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:32.446 14:06:35 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:32.446 14:06:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:32.446 14:06:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:32.446 14:06:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:32.446 14:06:35 -- scripts/common.sh@15 -- # local i 00:11:32.446 14:06:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:32.446 14:06:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:32.446 14:06:35 -- scripts/common.sh@24 -- # return 0 00:11:32.446 14:06:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:32.446 14:06:35 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:32.446 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@18 -- # shift 00:11:32.446 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 
14:06:35 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.446 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.446 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:32.446 14:06:35 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:32.447 
14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.447 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:32.447 14:06:35 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.447 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:32.448 14:06:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:32.448 14:06:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:32.448 14:06:35 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:32.448 14:06:35 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@18 -- # shift 00:11:32.448 14:06:35 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:32.448 
14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.448 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.448 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:32.448 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 
14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:32.449 14:06:35 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # IFS=: 00:11:32.449 14:06:35 -- nvme/functions.sh@21 -- # read -r reg val 00:11:32.449 14:06:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:32.449 14:06:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:32.449 14:06:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:32.449 14:06:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:32.449 14:06:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:32.449 14:06:35 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:32.449 14:06:35 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:32.449 14:06:35 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:32.449 14:06:35 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:32.449 14:06:35 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:32.449 14:06:35 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:32.449 14:06:35 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:32.449 14:06:35 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:32.449 14:06:35 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.449 14:06:35 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:32.449 14:06:35 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:32.449 14:06:35 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:32.449 14:06:35 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:32.449 14:06:35 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:32.449 14:06:35 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.449 14:06:35 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.449 14:06:35 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:32.449 14:06:35 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:32.449 14:06:35 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:32.449 14:06:35 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:32.449 14:06:35 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:32.449 14:06:35 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:32.449 14:06:35 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.449 14:06:35 -- nvme/functions.sh@197 -- # echo nvme0 00:11:32.449 14:06:35 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.449 14:06:35 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:32.449 14:06:35 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:32.449 14:06:35 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:32.449 14:06:35 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:32.449 14:06:35 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:32.449 14:06:35 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.449 14:06:35 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:32.449 14:06:35 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:32.449 14:06:35 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:32.449 14:06:35 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:32.449 14:06:35 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:32.449 14:06:35 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:32.449 14:06:35 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:32.449 14:06:35 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:32.449 14:06:35 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:32.449 14:06:35 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:32.449 14:06:35 -- nvme/functions.sh@204 -- # trap - ERR 00:11:32.449 14:06:35 -- nvme/functions.sh@204 -- # print_backtrace 00:11:32.449 14:06:35 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:32.449 14:06:35 -- common/autotest_common.sh@1142 -- # return 0 00:11:32.449 14:06:35 -- nvme/functions.sh@204 -- # trap - ERR 00:11:32.449 14:06:35 -- nvme/functions.sh@204 -- # print_backtrace 00:11:32.449 14:06:35 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:32.449 14:06:35 -- common/autotest_common.sh@1142 -- # return 0 00:11:32.449 14:06:35 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:32.449 14:06:35 -- nvme/functions.sh@206 -- # echo nvme0 00:11:32.449 14:06:35 -- nvme/functions.sh@207 -- # return 0 00:11:32.449 14:06:35 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:32.449 14:06:35 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:32.449 14:06:35 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:33.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.645 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:33.645 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.645 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.645 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.645 14:06:36 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:33.645 14:06:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:33.645 14:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.645 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:11:33.645 ************************************ 00:11:33.645 START TEST nvme_flexible_data_placement 00:11:33.645 ************************************ 00:11:33.645 14:06:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:33.904 Initializing NVMe Controllers 00:11:33.904 Attaching to 0000:00:09.0 00:11:33.904 Controller supports FDP Attached to 0000:00:09.0 00:11:33.904 Namespace ID: 1 Endurance Group ID: 1 00:11:33.904 Initialization complete. 00:11:33.904 00:11:33.904 ================================== 00:11:33.904 == FDP tests for Namespace: #01 == 00:11:33.904 ================================== 00:11:33.904 00:11:33.904 Get Feature: FDP: 00:11:33.904 ================= 00:11:33.904 Enabled: Yes 00:11:33.904 FDP configuration Index: 0 00:11:33.904 00:11:33.904 FDP configurations log page 00:11:33.904 =========================== 00:11:33.904 Number of FDP configurations: 1 00:11:33.904 Version: 0 00:11:33.904 Size: 112 00:11:33.904 FDP Configuration Descriptor: 0 00:11:33.904 Descriptor Size: 96 00:11:33.904 Reclaim Group Identifier format: 2 00:11:33.904 FDP Volatile Write Cache: Not Present 00:11:33.904 FDP Configuration: Valid 00:11:33.904 Vendor Specific Size: 0 00:11:33.904 Number of Reclaim Groups: 2 00:11:33.904 Number of Reclaim Unit Handles: 8 00:11:33.904 Max Placement Identifiers: 128 00:11:33.904 Number of Namespaces Supported: 256 00:11:33.904 Reclaim unit Nominal Size: 6000000 bytes 00:11:33.904 Estimated Reclaim Unit Time Limit: Not Reported 00:11:33.905 RUH Desc #000: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #001: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #002: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #003: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #004: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #005: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #006: RUH Type: Initially Isolated 00:11:33.905 RUH Desc #007: RUH Type: Initially Isolated 00:11:33.905 00:11:33.905 FDP reclaim unit handle usage log page 00:11:33.905 ====================================== 00:11:33.905 Number of Reclaim Unit Handles: 8 00:11:33.905 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:33.905 RUH Usage Desc #001: RUH Attributes: Unused 00:11:33.905 RUH Usage Desc #002: RUH Attributes: Unused 00:11:33.905 RUH Usage Desc #003: RUH Attributes: Unused 00:11:33.905 RUH Usage Desc #004: RUH Attributes: Unused 00:11:33.905 RUH Usage Desc #005: RUH Attributes: Unused 00:11:33.905 RUH Usage Desc #006: RUH Attributes: Unused 00:11:33.905 RUH Usage Desc #007: RUH Attributes: Unused 00:11:33.905 00:11:33.905 FDP statistics log page 00:11:33.905 ======================= 00:11:33.905 Host bytes with metadata written: 948633600 00:11:33.905 Media bytes with metadata written: 948961280 00:11:33.905 Media bytes erased: 0 00:11:33.905 00:11:33.905 FDP Reclaim unit handle status
00:11:33.905 ============================== 00:11:33.905 Number of RUHS descriptors: 2 00:11:33.905 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003750 00:11:33.905 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:33.905 00:11:33.905 FDP write on placement id: 0 success 00:11:33.905 00:11:33.905 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:33.905 00:11:33.905 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:33.905 00:11:33.905 Get Feature: FDP Events for Placement handle: #0 00:11:33.905 ======================== 00:11:33.905 Number of FDP Events: 6 00:11:33.905 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:33.905 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:33.905 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:33.905 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:33.905 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:33.905 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:33.905 00:11:33.905 FDP events log page 00:11:33.905 =================== 00:11:33.905 Number of FDP events: 1 00:11:33.905 FDP Event #0: 00:11:33.905 Event Type: RU Not Written to Capacity 00:11:33.905 Placement Identifier: Valid 00:11:33.905 NSID: Valid 00:11:33.905 Location: Valid 00:11:33.905 Placement Identifier: 0 00:11:33.905 Event Timestamp: 13 00:11:33.905 Namespace Identifier: 1 00:11:33.905 Reclaim Group Identifier: 0 00:11:33.905 Reclaim Unit Handle Identifier: 0 00:11:33.905 00:11:33.905 FDP test passed 00:11:33.905 00:11:33.905 real 0m0.244s 00:11:33.905 user 0m0.065s 00:11:33.905 sys 0m0.077s 00:11:33.905 14:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.905 ************************************ 00:11:33.905 END TEST nvme_flexible_data_placement 00:11:33.905 ************************************ 00:11:33.905 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:11:33.905 00:11:33.905 real 0m7.876s 00:11:33.905 user 0m1.049s 00:11:33.905 sys 0m1.654s 00:11:33.905 14:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.905 ************************************ 00:11:33.905 END TEST nvme_fdp 00:11:33.905 ************************************ 00:11:33.905 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:11:34.165 14:06:36 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:34.165 14:06:36 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:34.165 14:06:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:34.165 14:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.165 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:11:34.165 ************************************ 00:11:34.165 START TEST nvme_rpc 00:11:34.165 ************************************ 00:11:34.165 14:06:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:34.165 * Looking for test storage... 
00:11:34.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:34.165 14:06:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:34.165 14:06:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:34.165 14:06:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:34.165 14:06:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:34.165 14:06:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:34.165 14:06:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:34.165 14:06:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:34.165 14:06:37 -- scripts/common.sh@335 -- # IFS=.-: 00:11:34.165 14:06:37 -- scripts/common.sh@335 -- # read -ra ver1 00:11:34.165 14:06:37 -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.165 14:06:37 -- scripts/common.sh@336 -- # read -ra ver2 00:11:34.165 14:06:37 -- scripts/common.sh@337 -- # local 'op=<' 00:11:34.165 14:06:37 -- scripts/common.sh@339 -- # ver1_l=2 00:11:34.165 14:06:37 -- scripts/common.sh@340 -- # ver2_l=1 00:11:34.165 14:06:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:34.165 14:06:37 -- scripts/common.sh@343 -- # case "$op" in 00:11:34.165 14:06:37 -- scripts/common.sh@344 -- # : 1 00:11:34.165 14:06:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:34.165 14:06:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.165 14:06:37 -- scripts/common.sh@364 -- # decimal 1 00:11:34.165 14:06:37 -- scripts/common.sh@352 -- # local d=1 00:11:34.165 14:06:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.165 14:06:37 -- scripts/common.sh@354 -- # echo 1 00:11:34.165 14:06:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:34.165 14:06:37 -- scripts/common.sh@365 -- # decimal 2 00:11:34.165 14:06:37 -- scripts/common.sh@352 -- # local d=2 00:11:34.165 14:06:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.165 14:06:37 -- scripts/common.sh@354 -- # echo 2 00:11:34.165 14:06:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:34.165 14:06:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:34.165 14:06:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:34.165 14:06:37 -- scripts/common.sh@367 -- # return 0 00:11:34.165 14:06:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.166 14:06:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.166 --rc genhtml_branch_coverage=1 00:11:34.166 --rc genhtml_function_coverage=1 00:11:34.166 --rc genhtml_legend=1 00:11:34.166 --rc geninfo_all_blocks=1 00:11:34.166 --rc geninfo_unexecuted_blocks=1 00:11:34.166 00:11:34.166 ' 00:11:34.166 14:06:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.166 --rc genhtml_branch_coverage=1 00:11:34.166 --rc genhtml_function_coverage=1 00:11:34.166 --rc genhtml_legend=1 00:11:34.166 --rc geninfo_all_blocks=1 00:11:34.166 --rc geninfo_unexecuted_blocks=1 00:11:34.166 00:11:34.166 ' 00:11:34.166 14:06:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.166 --rc genhtml_branch_coverage=1 00:11:34.166 --rc genhtml_function_coverage=1 00:11:34.166 --rc genhtml_legend=1 00:11:34.166 --rc geninfo_all_blocks=1 00:11:34.166 --rc geninfo_unexecuted_blocks=1 00:11:34.166 00:11:34.166 ' 00:11:34.166 14:06:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:34.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.166 --rc genhtml_branch_coverage=1 00:11:34.166 --rc genhtml_function_coverage=1 00:11:34.166 --rc genhtml_legend=1 00:11:34.166 --rc geninfo_all_blocks=1 00:11:34.166 --rc geninfo_unexecuted_blocks=1 00:11:34.166 00:11:34.166 ' 00:11:34.166 14:06:37 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.166 14:06:37 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:34.166 14:06:37 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:34.166 14:06:37 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:34.166 14:06:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:34.166 14:06:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:34.166 14:06:37 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:34.166 14:06:37 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:34.166 14:06:37 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:34.166 14:06:37 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:34.166 14:06:37 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:34.425 14:06:37 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:34.425 14:06:37 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:34.425 14:06:37 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:34.425 14:06:37 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:34.425 14:06:37 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66563 00:11:34.425 14:06:37 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:34.426 14:06:37 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66563 00:11:34.426 14:06:37 -- common/autotest_common.sh@829 -- # '[' -z 66563 ']' 00:11:34.426 14:06:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.426 14:06:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.426 14:06:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.426 14:06:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.426 14:06:37 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:34.426 14:06:37 -- common/autotest_common.sh@10 -- # set +x 00:11:34.426 [2024-12-08 14:06:37.203034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
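Before any RPC work, nvme_rpc.sh resolves which controller to talk to: gen_nvme.sh emits a bdev config covering every NVMe device it can see, jq pulls out each PCI address, and the first one wins. A minimal sketch of that selection, under the same repo layout the trace uses:

  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh prints a bdev JSON config with one entry per NVMe controller
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  echo "${bdfs[0]}"  # first of the four devices on this run: 0000:00:06.0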
00:11:34.426 [2024-12-08 14:06:37.203181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66563 ] 00:11:34.720 [2024-12-08 14:06:37.359575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:34.720 [2024-12-08 14:06:37.628216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:34.720 [2024-12-08 14:06:37.628742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.720 [2024-12-08 14:06:37.628796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.103 14:06:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.103 14:06:38 -- common/autotest_common.sh@862 -- # return 0 00:11:36.103 14:06:38 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:36.103 Nvme0n1 00:11:36.103 14:06:38 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:36.103 14:06:38 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:36.363 request: 00:11:36.363 { 00:11:36.363 "filename": "non_existing_file", 00:11:36.363 "bdev_name": "Nvme0n1", 00:11:36.363 "method": "bdev_nvme_apply_firmware", 00:11:36.363 "req_id": 1 00:11:36.363 } 00:11:36.363 Got JSON-RPC error response 00:11:36.363 response: 00:11:36.363 { 00:11:36.363 "code": -32603, 00:11:36.363 "message": "open file failed." 00:11:36.363 } 00:11:36.363 14:06:39 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:36.363 14:06:39 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:36.363 14:06:39 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:36.624 14:06:39 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:36.624 14:06:39 -- nvme/nvme_rpc.sh@40 -- # killprocess 66563 00:11:36.624 14:06:39 -- common/autotest_common.sh@936 -- # '[' -z 66563 ']' 00:11:36.624 14:06:39 -- common/autotest_common.sh@940 -- # kill -0 66563 00:11:36.624 14:06:39 -- common/autotest_common.sh@941 -- # uname 00:11:36.624 14:06:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.624 14:06:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66563 00:11:36.624 14:06:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:36.624 14:06:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:36.624 killing process with pid 66563 00:11:36.624 14:06:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66563' 00:11:36.624 14:06:39 -- common/autotest_common.sh@955 -- # kill 66563 00:11:36.624 14:06:39 -- common/autotest_common.sh@960 -- # wait 66563 00:11:38.011 00:11:38.011 real 0m3.990s 00:11:38.011 user 0m7.255s 00:11:38.011 sys 0m0.773s 00:11:38.011 ************************************ 00:11:38.011 END TEST nvme_rpc 00:11:38.011 ************************************ 00:11:38.011 14:06:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.011 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 14:06:40 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:38.011 14:06:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:38.011 14:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 
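The nvme_rpc test above is a negative test: it attaches a controller, asks bdev_nvme_apply_firmware to load a file that does not exist, and treats anything other than the -32603 "open file failed." response as a failure. A hedged sketch of the same probe, using only RPCs that appear in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0  # exposes Nvme0n1
  # a missing firmware image must fail, not apply
  if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo 'unexpected success' >&2; exit 1
  fi
  "$rpc" bdev_nvme_detach_controller Nvme0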
00:11:38.011 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 ************************************ 00:11:38.011 START TEST nvme_rpc_timeouts 00:11:38.011 ************************************ 00:11:38.011 14:06:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:38.272 * Looking for test storage... 00:11:38.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:38.272 14:06:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:38.272 14:06:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:38.272 14:06:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:38.272 14:06:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:38.272 14:06:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:38.272 14:06:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:38.272 14:06:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:38.272 14:06:41 -- scripts/common.sh@335 -- # IFS=.-: 00:11:38.272 14:06:41 -- scripts/common.sh@335 -- # read -ra ver1 00:11:38.272 14:06:41 -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.272 14:06:41 -- scripts/common.sh@336 -- # read -ra ver2 00:11:38.272 14:06:41 -- scripts/common.sh@337 -- # local 'op=<' 00:11:38.272 14:06:41 -- scripts/common.sh@339 -- # ver1_l=2 00:11:38.272 14:06:41 -- scripts/common.sh@340 -- # ver2_l=1 00:11:38.272 14:06:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:38.272 14:06:41 -- scripts/common.sh@343 -- # case "$op" in 00:11:38.272 14:06:41 -- scripts/common.sh@344 -- # : 1 00:11:38.272 14:06:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:38.272 14:06:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.272 14:06:41 -- scripts/common.sh@364 -- # decimal 1 00:11:38.272 14:06:41 -- scripts/common.sh@352 -- # local d=1 00:11:38.272 14:06:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.272 14:06:41 -- scripts/common.sh@354 -- # echo 1 00:11:38.272 14:06:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:38.272 14:06:41 -- scripts/common.sh@365 -- # decimal 2 00:11:38.272 14:06:41 -- scripts/common.sh@352 -- # local d=2 00:11:38.272 14:06:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.272 14:06:41 -- scripts/common.sh@354 -- # echo 2 00:11:38.272 14:06:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:38.272 14:06:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:38.273 14:06:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:38.273 14:06:41 -- scripts/common.sh@367 -- # return 0 00:11:38.273 14:06:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.273 14:06:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:38.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.273 --rc genhtml_branch_coverage=1 00:11:38.273 --rc genhtml_function_coverage=1 00:11:38.273 --rc genhtml_legend=1 00:11:38.273 --rc geninfo_all_blocks=1 00:11:38.273 --rc geninfo_unexecuted_blocks=1 00:11:38.273 00:11:38.273 ' 00:11:38.273 14:06:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:38.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.273 --rc genhtml_branch_coverage=1 00:11:38.273 --rc genhtml_function_coverage=1 00:11:38.273 --rc genhtml_legend=1 00:11:38.273 --rc geninfo_all_blocks=1 00:11:38.273 --rc geninfo_unexecuted_blocks=1 00:11:38.273 00:11:38.273 ' 00:11:38.273 14:06:41 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:38.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.273 --rc genhtml_branch_coverage=1 00:11:38.273 --rc genhtml_function_coverage=1 00:11:38.273 --rc genhtml_legend=1 00:11:38.273 --rc geninfo_all_blocks=1 00:11:38.273 --rc geninfo_unexecuted_blocks=1 00:11:38.273 00:11:38.273 ' 00:11:38.273 14:06:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:38.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.273 --rc genhtml_branch_coverage=1 00:11:38.273 --rc genhtml_function_coverage=1 00:11:38.273 --rc genhtml_legend=1 00:11:38.273 --rc geninfo_all_blocks=1 00:11:38.273 --rc geninfo_unexecuted_blocks=1 00:11:38.273 00:11:38.273 ' 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66636 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66636 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66672 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:38.273 14:06:41 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66672 00:11:38.273 14:06:41 -- common/autotest_common.sh@829 -- # '[' -z 66672 ']' 00:11:38.273 14:06:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.273 14:06:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.273 14:06:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.273 14:06:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.273 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:11:38.273 [2024-12-08 14:06:41.152545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:38.273 [2024-12-08 14:06:41.152687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66672 ] 00:11:38.534 [2024-12-08 14:06:41.300688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.795 [2024-12-08 14:06:41.472572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:38.795 [2024-12-08 14:06:41.473008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.795 [2024-12-08 14:06:41.473010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.738 14:06:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.738 14:06:42 -- common/autotest_common.sh@862 -- # return 0 00:11:39.738 Checking default timeout settings: 00:11:39.738 14:06:42 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:39.738 14:06:42 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:40.309 Making settings changes with rpc: 00:11:40.309 14:06:42 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:40.309 14:06:42 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:40.309 Check default vs. modified settings: 00:11:40.309 14:06:43 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:40.309 14:06:43 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66636 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66636 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:40.568 Setting action_on_timeout is changed as expected. 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66636 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66636 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:40.568 Setting timeout_us is changed as expected. 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66636 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66636 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:40.568 Setting timeout_admin_us is changed as expected. 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:40.568 14:06:43 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66636 /tmp/settings_modified_66636 00:11:40.827 14:06:43 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66672 00:11:40.827 14:06:43 -- common/autotest_common.sh@936 -- # '[' -z 66672 ']' 00:11:40.827 14:06:43 -- common/autotest_common.sh@940 -- # kill -0 66672 00:11:40.827 14:06:43 -- common/autotest_common.sh@941 -- # uname 00:11:40.827 14:06:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.827 14:06:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66672 00:11:40.827 14:06:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:40.827 14:06:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:40.827 killing process with pid 66672 00:11:40.827 14:06:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66672' 00:11:40.827 14:06:43 -- common/autotest_common.sh@955 -- # kill 66672 00:11:40.827 14:06:43 -- common/autotest_common.sh@960 -- # wait 66672 00:11:42.207 RPC TIMEOUT SETTING TEST PASSED. 00:11:42.207 14:06:44 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
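Each field is pulled out of the saved JSON by the grep/awk/sed chain seen above, which strips quotes and trailing commas so the raw values compare cleanly. A sketch of that extraction; get_setting is a hypothetical name for the inline chain, and the expected values are the ones logged:

  # get_setting <name> <file> - hypothetical wrapper around the logged chain
  get_setting() {
    grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
  }
  before=$(get_setting timeout_us /tmp/settings_default_66636)   # -> 0
  after=$(get_setting timeout_us /tmp/settings_modified_66636)   # -> 12000000
  [[ $before != "$after" ]] && echo 'Setting timeout_us is changed as expected.'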
00:11:42.207 00:11:42.207 real 0m3.847s 00:11:42.207 user 0m7.405s 00:11:42.207 sys 0m0.561s 00:11:42.207 14:06:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:42.207 ************************************ 00:11:42.207 END TEST nvme_rpc_timeouts 00:11:42.207 ************************************ 00:11:42.207 14:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:42.207 14:06:44 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:42.207 14:06:44 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:42.207 14:06:44 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:42.207 14:06:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.207 14:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:42.207 ************************************ 00:11:42.207 START TEST nvme_xnvme 00:11:42.207 ************************************ 00:11:42.207 14:06:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:42.207 * Looking for test storage... 00:11:42.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:42.207 14:06:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:42.207 14:06:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:42.207 14:06:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:42.207 14:06:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:42.207 14:06:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:42.207 14:06:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:42.207 14:06:44 -- scripts/common.sh@335 -- # IFS=.-: 00:11:42.207 14:06:44 -- scripts/common.sh@335 -- # read -ra ver1 00:11:42.207 14:06:44 -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.207 14:06:44 -- scripts/common.sh@336 -- # read -ra ver2 00:11:42.207 14:06:44 -- scripts/common.sh@337 -- # local 'op=<' 00:11:42.207 14:06:44 -- scripts/common.sh@339 -- # ver1_l=2 00:11:42.207 14:06:44 -- scripts/common.sh@340 -- # ver2_l=1 00:11:42.207 14:06:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:42.207 14:06:44 -- scripts/common.sh@343 -- # case "$op" in 00:11:42.207 14:06:44 -- scripts/common.sh@344 -- # : 1 00:11:42.207 14:06:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:42.207 14:06:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.207 14:06:44 -- scripts/common.sh@364 -- # decimal 1 00:11:42.207 14:06:44 -- scripts/common.sh@352 -- # local d=1 00:11:42.207 14:06:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.207 14:06:44 -- scripts/common.sh@354 -- # echo 1 00:11:42.207 14:06:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:42.207 14:06:44 -- scripts/common.sh@365 -- # decimal 2 00:11:42.207 14:06:44 -- scripts/common.sh@352 -- # local d=2 00:11:42.207 14:06:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.207 14:06:44 -- scripts/common.sh@354 -- # echo 2 00:11:42.207 14:06:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:42.207 14:06:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:42.207 14:06:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:42.207 14:06:44 -- scripts/common.sh@367 -- # return 0 00:11:42.207 14:06:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.207 --rc genhtml_branch_coverage=1 00:11:42.207 --rc genhtml_function_coverage=1 00:11:42.207 --rc genhtml_legend=1 00:11:42.207 --rc geninfo_all_blocks=1 00:11:42.207 --rc geninfo_unexecuted_blocks=1 00:11:42.207 00:11:42.207 ' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.207 --rc genhtml_branch_coverage=1 00:11:42.207 --rc genhtml_function_coverage=1 00:11:42.207 --rc genhtml_legend=1 00:11:42.207 --rc geninfo_all_blocks=1 00:11:42.207 --rc geninfo_unexecuted_blocks=1 00:11:42.207 00:11:42.207 ' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.207 --rc genhtml_branch_coverage=1 00:11:42.207 --rc genhtml_function_coverage=1 00:11:42.207 --rc genhtml_legend=1 00:11:42.207 --rc geninfo_all_blocks=1 00:11:42.207 --rc geninfo_unexecuted_blocks=1 00:11:42.207 00:11:42.207 ' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:42.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.207 --rc genhtml_branch_coverage=1 00:11:42.207 --rc genhtml_function_coverage=1 00:11:42.207 --rc genhtml_legend=1 00:11:42.207 --rc geninfo_all_blocks=1 00:11:42.207 --rc geninfo_unexecuted_blocks=1 00:11:42.207 00:11:42.207 ' 00:11:42.207 14:06:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.207 14:06:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.207 14:06:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.207 14:06:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.207 14:06:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.207 14:06:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.207 14:06:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.207 14:06:44 -- paths/export.sh@5 -- # export PATH 00:11:42.207 14:06:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.207 14:06:44 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:42.207 14:06:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:42.207 14:06:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.207 14:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:42.207 ************************************ 00:11:42.207 START TEST xnvme_to_malloc_dd_copy 00:11:42.207 ************************************ 00:11:42.207 14:06:44 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:42.207 14:06:44 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:42.207 14:06:44 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:42.207 14:06:44 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:42.207 14:06:45 -- dd/common.sh@191 -- # return 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@18 -- # local io 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:42.207 14:06:45 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:42.208 14:06:45 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:42.208 14:06:45 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:42.208 14:06:45 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:42.208 14:06:45 -- dd/common.sh@31 -- # xtrace_disable 00:11:42.208 14:06:45 -- common/autotest_common.sh@10 -- # set +x 00:11:42.208 { 00:11:42.208 "subsystems": [ 00:11:42.208 { 00:11:42.208 "subsystem": "bdev", 00:11:42.208 "config": [ 00:11:42.208 { 00:11:42.208 "params": { 00:11:42.208 "block_size": 512, 00:11:42.208 "num_blocks": 2097152, 00:11:42.208 "name": "malloc0" 00:11:42.208 }, 00:11:42.208 "method": "bdev_malloc_create" 00:11:42.208 }, 00:11:42.208 { 00:11:42.208 "params": { 00:11:42.208 "io_mechanism": "libaio", 00:11:42.208 "filename": "/dev/nullb0", 00:11:42.208 "name": "null0" 00:11:42.208 }, 00:11:42.208 "method": "bdev_xnvme_create" 00:11:42.208 }, 00:11:42.208 { 00:11:42.208 "method": "bdev_wait_for_examine" 00:11:42.208 } 00:11:42.208 ] 00:11:42.208 } 00:11:42.208 ] 00:11:42.208 } 00:11:42.208 [2024-12-08 14:06:45.093486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:42.208 [2024-12-08 14:06:45.093633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66812 ] 00:11:42.468 [2024-12-08 14:06:45.250202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.727 [2024-12-08 14:06:45.517746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.276  [2024-12-08T14:06:49.139Z] Copying: 227/1024 [MB] (227 MBps) [2024-12-08T14:06:50.081Z] Copying: 510/1024 [MB] (282 MBps) [2024-12-08T14:06:50.654Z] Copying: 817/1024 [MB] (307 MBps) [2024-12-08T14:06:52.575Z] Copying: 1024/1024 [MB] (average 278 MBps) 00:11:49.655 00:11:49.655 14:06:52 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:49.655 14:06:52 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:49.655 14:06:52 -- dd/common.sh@31 -- # xtrace_disable 00:11:49.655 14:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:49.916 { 00:11:49.916 "subsystems": [ 00:11:49.916 { 00:11:49.916 "subsystem": "bdev", 00:11:49.916 "config": [ 00:11:49.916 { 00:11:49.916 "params": { 00:11:49.916 "block_size": 512, 00:11:49.916 "num_blocks": 2097152, 00:11:49.916 "name": "malloc0" 00:11:49.916 }, 00:11:49.916 "method": "bdev_malloc_create" 00:11:49.916 }, 00:11:49.916 { 00:11:49.916 "params": { 00:11:49.916 "io_mechanism": "libaio", 00:11:49.916 "filename": "/dev/nullb0", 00:11:49.916 "name": "null0" 00:11:49.916 }, 00:11:49.916 "method": "bdev_xnvme_create" 00:11:49.916 }, 00:11:49.916 { 00:11:49.916 "method": "bdev_wait_for_examine" 00:11:49.916 } 00:11:49.916 ] 00:11:49.916 } 00:11:49.916 ] 00:11:49.916 } 00:11:49.916 [2024-12-08 14:06:52.619994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
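Both directions of the copy share one JSON config, a 2097152-block (1 GiB at 512 B) malloc bdev plus an xnvme bdev wrapping /dev/nullb0, handed to spdk_dd over a file descriptor so nothing touches disk. A minimal sketch of the forward copy; the shell-variable framing is illustrative, the parameters are the logged ones:

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
      "method": "bdev_malloc_create" },
    { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }'
  # process substitution is why the trace shows --json /dev/fd/62
  "$dd_bin" --ib=malloc0 --ob=null0 --json <(printf '%s' "$conf")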
00:11:49.916 [2024-12-08 14:06:52.620576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66901 ] 00:11:49.916 [2024-12-08 14:06:52.770173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.175 [2024-12-08 14:06:52.946078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.086  [2024-12-08T14:06:55.945Z] Copying: 307/1024 [MB] (307 MBps) [2024-12-08T14:06:56.886Z] Copying: 616/1024 [MB] (308 MBps) [2024-12-08T14:06:57.145Z] Copying: 924/1024 [MB] (308 MBps) [2024-12-08T14:06:59.689Z] Copying: 1024/1024 [MB] (average 308 MBps) 00:11:56.769 00:11:56.769 14:06:59 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:56.769 14:06:59 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:56.769 14:06:59 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:56.769 14:06:59 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:56.769 14:06:59 -- dd/common.sh@31 -- # xtrace_disable 00:11:56.769 14:06:59 -- common/autotest_common.sh@10 -- # set +x 00:11:56.769 { 00:11:56.769 "subsystems": [ 00:11:56.769 { 00:11:56.769 "subsystem": "bdev", 00:11:56.769 "config": [ 00:11:56.769 { 00:11:56.769 "params": { 00:11:56.769 "block_size": 512, 00:11:56.769 "num_blocks": 2097152, 00:11:56.769 "name": "malloc0" 00:11:56.769 }, 00:11:56.769 "method": "bdev_malloc_create" 00:11:56.769 }, 00:11:56.769 { 00:11:56.769 "params": { 00:11:56.769 "io_mechanism": "io_uring", 00:11:56.769 "filename": "/dev/nullb0", 00:11:56.769 "name": "null0" 00:11:56.769 }, 00:11:56.769 "method": "bdev_xnvme_create" 00:11:56.769 }, 00:11:56.769 { 00:11:56.769 "method": "bdev_wait_for_examine" 00:11:56.769 } 00:11:56.769 ] 00:11:56.769 } 00:11:56.769 ] 00:11:56.769 } 00:11:56.769 [2024-12-08 14:06:59.260226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
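Only the io_mechanism key changes between the libaio passes (roughly 278-308 MBps here) and the io_uring passes (roughly 318 MBps). A rough reconstruction of xnvme.sh's driver loop, assuming gen_conf (the dd/common.sh helper visible in the trace) serializes these arrays into the JSON shown earlier:

  declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
  for io in libaio io_uring; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    "$dd_bin" --ib=malloc0 --ob=null0 --json <(gen_conf)  # malloc -> null
    "$dd_bin" --ib=null0 --ob=malloc0 --json <(gen_conf)  # null -> malloc
  done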
00:11:56.769 [2024-12-08 14:06:59.260319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66987 ] 00:11:56.769 [2024-12-08 14:06:59.393767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.769 [2024-12-08 14:06:59.563105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.686  [2024-12-08T14:07:02.548Z] Copying: 317/1024 [MB] (317 MBps) [2024-12-08T14:07:03.489Z] Copying: 636/1024 [MB] (318 MBps) [2024-12-08T14:07:03.749Z] Copying: 955/1024 [MB] (319 MBps) [2024-12-08T14:07:06.293Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:03.373 00:12:03.373 14:07:05 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:03.373 14:07:05 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:03.373 14:07:05 -- dd/common.sh@31 -- # xtrace_disable 00:12:03.373 14:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:03.373 { 00:12:03.373 "subsystems": [ 00:12:03.373 { 00:12:03.373 "subsystem": "bdev", 00:12:03.373 "config": [ 00:12:03.373 { 00:12:03.373 "params": { 00:12:03.373 "block_size": 512, 00:12:03.373 "num_blocks": 2097152, 00:12:03.373 "name": "malloc0" 00:12:03.373 }, 00:12:03.373 "method": "bdev_malloc_create" 00:12:03.373 }, 00:12:03.373 { 00:12:03.373 "params": { 00:12:03.373 "io_mechanism": "io_uring", 00:12:03.373 "filename": "/dev/nullb0", 00:12:03.373 "name": "null0" 00:12:03.373 }, 00:12:03.373 "method": "bdev_xnvme_create" 00:12:03.373 }, 00:12:03.373 { 00:12:03.373 "method": "bdev_wait_for_examine" 00:12:03.373 } 00:12:03.373 ] 00:12:03.373 } 00:12:03.373 ] 00:12:03.373 } 00:12:03.373 [2024-12-08 14:07:05.801972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
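xnvme_bdevperf reuses the same 1 GiB null_blk device but drives it through bdevperf: queue depth 64, 4 KiB random reads, five seconds per io mechanism. A sketch of one libaio run with the exact flags from the log; the inline config mirrors the logged one:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }'
  modprobe null_blk gb=1  # backs /dev/nullb0 with a 1 GiB null device
  "$bdevperf" --json <(printf '%s' "$conf") -q 64 -w randread -t 5 -T null0 -o 4096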
00:12:03.373 [2024-12-08 14:07:05.802105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67064 ] 00:12:03.373 [2024-12-08 14:07:05.951475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.373 [2024-12-08 14:07:06.120471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.288  [2024-12-08T14:07:09.153Z] Copying: 320/1024 [MB] (320 MBps) [2024-12-08T14:07:10.097Z] Copying: 642/1024 [MB] (321 MBps) [2024-12-08T14:07:10.358Z] Copying: 964/1024 [MB] (321 MBps) [2024-12-08T14:07:12.275Z] Copying: 1024/1024 [MB] (average 321 MBps) 00:12:09.355 00:12:09.355 14:07:12 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:09.355 14:07:12 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:09.617 00:12:09.617 real 0m27.310s 00:12:09.617 user 0m23.636s 00:12:09.617 sys 0m3.116s 00:12:09.617 14:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.617 ************************************ 00:12:09.617 END TEST xnvme_to_malloc_dd_copy 00:12:09.617 ************************************ 00:12:09.617 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:09.617 14:07:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:09.617 14:07:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.617 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:09.617 ************************************ 00:12:09.617 START TEST xnvme_bdevperf 00:12:09.617 ************************************ 00:12:09.617 14:07:12 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:09.617 14:07:12 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:09.617 14:07:12 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:09.617 14:07:12 -- dd/common.sh@191 -- # return 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@60 -- # local io 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:09.617 14:07:12 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:09.617 14:07:12 -- dd/common.sh@31 -- # xtrace_disable 00:12:09.617 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:12:09.617 { 00:12:09.617 "subsystems": [ 00:12:09.617 { 00:12:09.617 "subsystem": "bdev", 00:12:09.617 "config": [ 00:12:09.617 { 00:12:09.617 "params": { 00:12:09.617 "io_mechanism": "libaio", 
00:12:09.617 "filename": "/dev/nullb0", 00:12:09.617 "name": "null0" 00:12:09.617 }, 00:12:09.617 "method": "bdev_xnvme_create" 00:12:09.617 }, 00:12:09.617 { 00:12:09.617 "method": "bdev_wait_for_examine" 00:12:09.617 } 00:12:09.617 ] 00:12:09.617 } 00:12:09.617 ] 00:12:09.617 } 00:12:09.617 [2024-12-08 14:07:12.453067] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:09.617 [2024-12-08 14:07:12.453177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67169 ] 00:12:09.879 [2024-12-08 14:07:12.600491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.879 [2024-12-08 14:07:12.766484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.140 Running I/O for 5 seconds... 00:12:15.434 00:12:15.434 Latency(us) 00:12:15.434 [2024-12-08T14:07:18.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.434 [2024-12-08T14:07:18.354Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:15.434 null0 : 5.00 209505.41 818.38 0.00 0.00 303.35 111.85 658.51 00:12:15.435 [2024-12-08T14:07:18.355Z] =================================================================================================================== 00:12:15.435 [2024-12-08T14:07:18.355Z] Total : 209505.41 818.38 0.00 0.00 303.35 111.85 658.51 00:12:16.007 14:07:18 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:16.007 14:07:18 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:16.007 14:07:18 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:16.007 14:07:18 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:16.007 14:07:18 -- dd/common.sh@31 -- # xtrace_disable 00:12:16.007 14:07:18 -- common/autotest_common.sh@10 -- # set +x 00:12:16.007 { 00:12:16.007 "subsystems": [ 00:12:16.007 { 00:12:16.007 "subsystem": "bdev", 00:12:16.007 "config": [ 00:12:16.007 { 00:12:16.007 "params": { 00:12:16.007 "io_mechanism": "io_uring", 00:12:16.007 "filename": "/dev/nullb0", 00:12:16.007 "name": "null0" 00:12:16.007 }, 00:12:16.007 "method": "bdev_xnvme_create" 00:12:16.007 }, 00:12:16.007 { 00:12:16.007 "method": "bdev_wait_for_examine" 00:12:16.007 } 00:12:16.007 ] 00:12:16.007 } 00:12:16.007 ] 00:12:16.007 } 00:12:16.007 [2024-12-08 14:07:18.736927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:16.007 [2024-12-08 14:07:18.737050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67243 ] 00:12:16.007 [2024-12-08 14:07:18.884199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.268 [2024-12-08 14:07:19.048772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.529 Running I/O for 5 seconds... 
00:12:21.821 00:12:21.821 Latency(us) 00:12:21.821 [2024-12-08T14:07:24.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.821 [2024-12-08T14:07:24.741Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:21.821 null0 : 5.00 236761.54 924.85 0.00 0.00 268.18 155.18 633.30 00:12:21.821 [2024-12-08T14:07:24.741Z] =================================================================================================================== 00:12:21.821 [2024-12-08T14:07:24.741Z] Total : 236761.54 924.85 0.00 0.00 268.18 155.18 633.30 00:12:22.083 14:07:24 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:22.083 14:07:24 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:22.083 00:12:22.083 real 0m12.590s 00:12:22.083 user 0m10.064s 00:12:22.083 sys 0m2.280s 00:12:22.083 14:07:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.083 ************************************ 00:12:22.083 END TEST xnvme_bdevperf 00:12:22.083 ************************************ 00:12:22.083 14:07:24 -- common/autotest_common.sh@10 -- # set +x 00:12:22.083 00:12:22.083 real 0m40.164s 00:12:22.083 user 0m33.802s 00:12:22.083 sys 0m5.525s 00:12:22.083 14:07:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:22.083 14:07:24 -- common/autotest_common.sh@10 -- # set +x 00:12:22.345 ************************************ 00:12:22.345 END TEST nvme_xnvme 00:12:22.345 ************************************ 00:12:22.345 14:07:25 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:22.345 14:07:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:22.345 14:07:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.345 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:12:22.345 ************************************ 00:12:22.345 START TEST blockdev_xnvme 00:12:22.345 ************************************ 00:12:22.345 14:07:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:22.345 * Looking for test storage... 00:12:22.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:22.345 14:07:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:22.345 14:07:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:22.345 14:07:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:22.345 14:07:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:22.345 14:07:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:22.345 14:07:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:22.345 14:07:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:22.345 14:07:25 -- scripts/common.sh@335 -- # IFS=.-: 00:12:22.345 14:07:25 -- scripts/common.sh@335 -- # read -ra ver1 00:12:22.345 14:07:25 -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.345 14:07:25 -- scripts/common.sh@336 -- # read -ra ver2 00:12:22.345 14:07:25 -- scripts/common.sh@337 -- # local 'op=<' 00:12:22.345 14:07:25 -- scripts/common.sh@339 -- # ver1_l=2 00:12:22.345 14:07:25 -- scripts/common.sh@340 -- # ver2_l=1 00:12:22.345 14:07:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:22.345 14:07:25 -- scripts/common.sh@343 -- # case "$op" in 00:12:22.345 14:07:25 -- scripts/common.sh@344 -- # : 1 00:12:22.345 14:07:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:22.345 14:07:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.345 14:07:25 -- scripts/common.sh@364 -- # decimal 1 00:12:22.345 14:07:25 -- scripts/common.sh@352 -- # local d=1 00:12:22.345 14:07:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.345 14:07:25 -- scripts/common.sh@354 -- # echo 1 00:12:22.345 14:07:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:22.345 14:07:25 -- scripts/common.sh@365 -- # decimal 2 00:12:22.345 14:07:25 -- scripts/common.sh@352 -- # local d=2 00:12:22.345 14:07:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.345 14:07:25 -- scripts/common.sh@354 -- # echo 2 00:12:22.345 14:07:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:22.346 14:07:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:22.346 14:07:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:22.346 14:07:25 -- scripts/common.sh@367 -- # return 0 00:12:22.346 14:07:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.346 14:07:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.346 --rc genhtml_branch_coverage=1 00:12:22.346 --rc genhtml_function_coverage=1 00:12:22.346 --rc genhtml_legend=1 00:12:22.346 --rc geninfo_all_blocks=1 00:12:22.346 --rc geninfo_unexecuted_blocks=1 00:12:22.346 00:12:22.346 ' 00:12:22.346 14:07:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.346 --rc genhtml_branch_coverage=1 00:12:22.346 --rc genhtml_function_coverage=1 00:12:22.346 --rc genhtml_legend=1 00:12:22.346 --rc geninfo_all_blocks=1 00:12:22.346 --rc geninfo_unexecuted_blocks=1 00:12:22.346 00:12:22.346 ' 00:12:22.346 14:07:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.346 --rc genhtml_branch_coverage=1 00:12:22.346 --rc genhtml_function_coverage=1 00:12:22.346 --rc genhtml_legend=1 00:12:22.346 --rc geninfo_all_blocks=1 00:12:22.346 --rc geninfo_unexecuted_blocks=1 00:12:22.346 00:12:22.346 ' 00:12:22.346 14:07:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.346 --rc genhtml_branch_coverage=1 00:12:22.346 --rc genhtml_function_coverage=1 00:12:22.346 --rc genhtml_legend=1 00:12:22.346 --rc geninfo_all_blocks=1 00:12:22.346 --rc geninfo_unexecuted_blocks=1 00:12:22.346 00:12:22.346 ' 00:12:22.346 14:07:25 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:22.346 14:07:25 -- bdev/nbd_common.sh@6 -- # set -e 00:12:22.346 14:07:25 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:22.346 14:07:25 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:22.346 14:07:25 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:22.346 14:07:25 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:22.346 14:07:25 -- bdev/blockdev.sh@18 -- # : 00:12:22.346 14:07:25 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:22.346 14:07:25 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:22.346 14:07:25 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:22.346 14:07:25 -- bdev/blockdev.sh@672 -- # uname -s 00:12:22.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
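The spdk_tgt startup traced here reduces to a simple pattern: launch the target in the background, install a cleanup trap, and poll its UNIX-domain RPC socket until a call succeeds. A minimal sketch of that flow, simplified from the autotest helpers (rpc_get_methods serves only as a generic liveness probe here, and the retry pacing is an assumption; the real waitforlisten helper also enforces a retry budget):

    # Sketch: start the SPDK target and block until /var/tmp/spdk.sock answers.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.1                                       # assumed pacing between probes
    done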
00:12:22.346 14:07:25 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:22.346 14:07:25 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:22.346 14:07:25 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:22.346 14:07:25 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:22.346 14:07:25 -- bdev/blockdev.sh@682 -- # dek= 00:12:22.346 14:07:25 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:22.346 14:07:25 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:22.346 14:07:25 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:22.346 14:07:25 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:22.346 14:07:25 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:22.346 14:07:25 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:22.346 14:07:25 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67384 00:12:22.346 14:07:25 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:22.346 14:07:25 -- bdev/blockdev.sh@47 -- # waitforlisten 67384 00:12:22.346 14:07:25 -- common/autotest_common.sh@829 -- # '[' -z 67384 ']' 00:12:22.346 14:07:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.346 14:07:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.346 14:07:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.346 14:07:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.346 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:12:22.346 14:07:25 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:22.608 [2024-12-08 14:07:25.288057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:22.608 [2024-12-08 14:07:25.288193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67384 ] 00:12:22.608 [2024-12-08 14:07:25.441314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.869 [2024-12-08 14:07:25.668675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:22.869 [2024-12-08 14:07:25.668893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.258 14:07:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.258 14:07:26 -- common/autotest_common.sh@862 -- # return 0 00:12:24.258 14:07:26 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:24.258 14:07:26 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:24.258 14:07:26 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:24.258 14:07:26 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:24.258 14:07:26 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:24.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:24.519 Waiting for block devices as requested 00:12:24.519 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:24.519 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:24.780 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:24.780 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:30.175 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:30.175 14:07:32 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:30.175 14:07:32 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:30.175 14:07:32 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:30.175 14:07:32 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:30.175 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:30.175 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.175 14:07:32 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:30.175 14:07:32 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:30.175 14:07:32 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:30.176 14:07:32 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:30.176 14:07:32 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:30.176 14:07:32 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:30.176 14:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 14:07:32 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:30.176 nvme0n1 00:12:30.176 nvme1n1 00:12:30.176 nvme1n2 00:12:30.176 nvme1n3 00:12:30.176 nvme2n1 00:12:30.176 nvme3n1 00:12:30.176 14:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:30.176 14:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 14:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@738 -- # cat 00:12:30.176 14:07:32 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:30.176 14:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 14:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:30.176 14:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 14:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:30.176 14:07:32 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 14:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:30.176 14:07:32 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:30.176 14:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 14:07:32 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:30.176 14:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 14:07:32 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:30.176 14:07:32 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0afd7646-4c37-4f76-a355-7e4dd0f9ab39"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0afd7646-4c37-4f76-a355-7e4dd0f9ab39",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b3332101-3b63-450b-a911-45ff6b54954e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3332101-3b63-450b-a911-45ff6b54954e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "0c8f58b8-8804-4edd-ac58-0989516f5308"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c8f58b8-8804-4edd-ac58-0989516f5308",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "f00df6f5-6034-46c6-84bd-fd533749958a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f00df6f5-6034-46c6-84bd-fd533749958a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 
'{' ' "name": "nvme2n1",' ' "aliases": [' ' "9e657d4e-a787-4064-ac56-3d457e8ea88d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9e657d4e-a787-4064-ac56-3d457e8ea88d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e374070a-08af-4efa-9ab7-2b93048908fb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e374070a-08af-4efa-9ab7-2b93048908fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:30.176 14:07:32 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:30.176 14:07:32 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:30.176 14:07:32 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:30.176 14:07:32 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:30.176 14:07:32 -- bdev/blockdev.sh@752 -- # killprocess 67384 00:12:30.176 14:07:32 -- common/autotest_common.sh@936 -- # '[' -z 67384 ']' 00:12:30.176 14:07:32 -- common/autotest_common.sh@940 -- # kill -0 67384 00:12:30.176 14:07:32 -- common/autotest_common.sh@941 -- # uname 00:12:30.176 14:07:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.176 14:07:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67384 00:12:30.176 killing process with pid 67384 00:12:30.176 14:07:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.176 14:07:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.176 14:07:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67384' 00:12:30.176 14:07:32 -- common/autotest_common.sh@955 -- # kill 67384 00:12:30.176 14:07:32 -- common/autotest_common.sh@960 -- # wait 67384 00:12:31.563 14:07:34 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:31.563 14:07:34 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:31.563 14:07:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:31.563 14:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.563 14:07:34 -- common/autotest_common.sh@10 -- # set +x 00:12:31.563 ************************************ 00:12:31.563 START TEST bdev_hello_world 00:12:31.563 ************************************ 00:12:31.563 14:07:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:31.563 [2024-12-08 14:07:34.236529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:31.563 [2024-12-08 14:07:34.236641] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67773 ] 00:12:31.563 [2024-12-08 14:07:34.387605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.824 [2024-12-08 14:07:34.549652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.085 [2024-12-08 14:07:34.853712] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:32.085 [2024-12-08 14:07:34.853753] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:32.086 [2024-12-08 14:07:34.853765] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:32.086 [2024-12-08 14:07:34.855300] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:32.086 [2024-12-08 14:07:34.855744] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:32.086 [2024-12-08 14:07:34.855767] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:32.086 [2024-12-08 14:07:34.856121] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:32.086 00:12:32.086 [2024-12-08 14:07:34.856149] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:32.662 00:12:32.662 real 0m1.341s 00:12:32.662 user 0m1.042s 00:12:32.662 sys 0m0.180s 00:12:32.662 14:07:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.662 ************************************ 00:12:32.662 END TEST bdev_hello_world 00:12:32.662 ************************************ 00:12:32.662 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.662 14:07:35 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:32.662 14:07:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.662 14:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.662 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.662 ************************************ 00:12:32.662 START TEST bdev_bounds 00:12:32.662 ************************************ 00:12:32.662 14:07:35 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:32.662 14:07:35 -- bdev/blockdev.sh@288 -- # bdevio_pid=67804 00:12:32.662 14:07:35 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:32.662 Process bdevio pid: 67804 00:12:32.662 14:07:35 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67804' 00:12:32.662 14:07:35 -- bdev/blockdev.sh@291 -- # waitforlisten 67804 00:12:32.662 14:07:35 -- common/autotest_common.sh@829 -- # '[' -z 67804 ']' 00:12:32.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.662 14:07:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.662 14:07:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.662 14:07:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
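bdev_bounds drives the bdevio app over RPC: the -w flag has bdevio start up and wait for an RPC trigger instead of running tests immediately, the generated bdev.json recreates the six xNVMe bdevs inside it, and once the socket is listening the python driver fires every CUnit suite. A condensed sketch of that sequence, with paths and flags exactly as they appear in this run (waitforlisten and killprocess are the shared autotest helpers seen throughout these traces):

    # Sketch: run bdevio waiting for RPC, then trigger the suites and tear down.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" '' &
    bdevio_pid=$!
    trap 'killprocess "$bdevio_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"                       # poll the RPC socket as above
    "$spdk/test/bdev/bdevio/tests.py" perform_tests   # runs each "bdevio tests on: ..." suite
    trap - SIGINT SIGTERM EXIT
    killprocess "$bdevio_pid"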
00:12:32.662 14:07:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.662 14:07:35 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:32.662 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:12:32.921 [2024-12-08 14:07:35.637284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.921 [2024-12-08 14:07:35.637393] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67804 ] 00:12:32.921 [2024-12-08 14:07:35.781515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.184 [2024-12-08 14:07:35.950659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.184 [2024-12-08 14:07:35.951146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.184 [2024-12-08 14:07:35.951155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.756 14:07:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.756 14:07:36 -- common/autotest_common.sh@862 -- # return 0 00:12:33.756 14:07:36 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:33.756 I/O targets: 00:12:33.756 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:33.756 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:33.756 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:33.756 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:33.756 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:33.756 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:33.756 00:12:33.756 00:12:33.756 CUnit - A unit testing framework for C - Version 2.1-3 00:12:33.756 http://cunit.sourceforge.net/ 00:12:33.756 00:12:33.756 00:12:33.756 Suite: bdevio tests on: nvme3n1 00:12:33.756 Test: blockdev write read block ...passed 00:12:33.756 Test: blockdev write zeroes read block ...passed 00:12:33.756 Test: blockdev write zeroes read no split ...passed 00:12:33.756 Test: blockdev write zeroes read split ...passed 00:12:33.756 Test: blockdev write zeroes read split partial ...passed 00:12:33.756 Test: blockdev reset ...passed 00:12:33.756 Test: blockdev write read 8 blocks ...passed 00:12:33.756 Test: blockdev write read size > 128k ...passed 00:12:33.756 Test: blockdev write read invalid size ...passed 00:12:33.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.756 Test: blockdev write read max offset ...passed 00:12:33.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.756 Test: blockdev writev readv 8 blocks ...passed 00:12:33.756 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.756 Test: blockdev writev readv block ...passed 00:12:33.756 Test: blockdev writev readv size > 128k ...passed 00:12:33.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.756 Test: blockdev comparev and writev ...passed 00:12:33.756 Test: blockdev nvme passthru rw ...passed 00:12:33.756 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.756 Test: blockdev nvme admin passthru ...passed 00:12:33.756 Test: blockdev copy ...passed 00:12:33.756 Suite: bdevio tests on: nvme2n1 00:12:33.757 Test: blockdev write read 
block ...passed 00:12:33.757 Test: blockdev write zeroes read block ...passed 00:12:33.757 Test: blockdev write zeroes read no split ...passed 00:12:33.757 Test: blockdev write zeroes read split ...passed 00:12:33.757 Test: blockdev write zeroes read split partial ...passed 00:12:33.757 Test: blockdev reset ...passed 00:12:33.757 Test: blockdev write read 8 blocks ...passed 00:12:33.757 Test: blockdev write read size > 128k ...passed 00:12:33.757 Test: blockdev write read invalid size ...passed 00:12:33.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.757 Test: blockdev write read max offset ...passed 00:12:33.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.757 Test: blockdev writev readv 8 blocks ...passed 00:12:33.757 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.757 Test: blockdev writev readv block ...passed 00:12:33.757 Test: blockdev writev readv size > 128k ...passed 00:12:33.757 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.757 Test: blockdev comparev and writev ...passed 00:12:33.757 Test: blockdev nvme passthru rw ...passed 00:12:33.757 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.757 Test: blockdev nvme admin passthru ...passed 00:12:33.757 Test: blockdev copy ...passed 00:12:33.757 Suite: bdevio tests on: nvme1n3 00:12:33.757 Test: blockdev write read block ...passed 00:12:33.757 Test: blockdev write zeroes read block ...passed 00:12:33.757 Test: blockdev write zeroes read no split ...passed 00:12:33.757 Test: blockdev write zeroes read split ...passed 00:12:34.018 Test: blockdev write zeroes read split partial ...passed 00:12:34.018 Test: blockdev reset ...passed 00:12:34.018 Test: blockdev write read 8 blocks ...passed 00:12:34.018 Test: blockdev write read size > 128k ...passed 00:12:34.018 Test: blockdev write read invalid size ...passed 00:12:34.019 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.019 Test: blockdev write read max offset ...passed 00:12:34.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.019 Test: blockdev writev readv 8 blocks ...passed 00:12:34.019 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.019 Test: blockdev writev readv block ...passed 00:12:34.019 Test: blockdev writev readv size > 128k ...passed 00:12:34.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.019 Test: blockdev comparev and writev ...passed 00:12:34.019 Test: blockdev nvme passthru rw ...passed 00:12:34.019 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.019 Test: blockdev nvme admin passthru ...passed 00:12:34.019 Test: blockdev copy ...passed 00:12:34.019 Suite: bdevio tests on: nvme1n2 00:12:34.019 Test: blockdev write read block ...passed 00:12:34.019 Test: blockdev write zeroes read block ...passed 00:12:34.019 Test: blockdev write zeroes read no split ...passed 00:12:34.019 Test: blockdev write zeroes read split ...passed 00:12:34.019 Test: blockdev write zeroes read split partial ...passed 00:12:34.019 Test: blockdev reset ...passed 00:12:34.019 Test: blockdev write read 8 blocks ...passed 00:12:34.019 Test: blockdev write read size > 128k ...passed 00:12:34.019 Test: blockdev write read invalid size ...passed 00:12:34.019 Test: blockdev write read offset + nbytes 
== size of blockdev ...passed 00:12:34.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.019 Test: blockdev write read max offset ...passed 00:12:34.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.019 Test: blockdev writev readv 8 blocks ...passed 00:12:34.019 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.019 Test: blockdev writev readv block ...passed 00:12:34.019 Test: blockdev writev readv size > 128k ...passed 00:12:34.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.019 Test: blockdev comparev and writev ...passed 00:12:34.019 Test: blockdev nvme passthru rw ...passed 00:12:34.019 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.019 Test: blockdev nvme admin passthru ...passed 00:12:34.019 Test: blockdev copy ...passed 00:12:34.019 Suite: bdevio tests on: nvme1n1 00:12:34.019 Test: blockdev write read block ...passed 00:12:34.019 Test: blockdev write zeroes read block ...passed 00:12:34.019 Test: blockdev write zeroes read no split ...passed 00:12:34.019 Test: blockdev write zeroes read split ...passed 00:12:34.019 Test: blockdev write zeroes read split partial ...passed 00:12:34.019 Test: blockdev reset ...passed 00:12:34.019 Test: blockdev write read 8 blocks ...passed 00:12:34.019 Test: blockdev write read size > 128k ...passed 00:12:34.019 Test: blockdev write read invalid size ...passed 00:12:34.019 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.019 Test: blockdev write read max offset ...passed 00:12:34.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.019 Test: blockdev writev readv 8 blocks ...passed 00:12:34.019 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.019 Test: blockdev writev readv block ...passed 00:12:34.019 Test: blockdev writev readv size > 128k ...passed 00:12:34.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.019 Test: blockdev comparev and writev ...passed 00:12:34.019 Test: blockdev nvme passthru rw ...passed 00:12:34.019 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.019 Test: blockdev nvme admin passthru ...passed 00:12:34.019 Test: blockdev copy ...passed 00:12:34.019 Suite: bdevio tests on: nvme0n1 00:12:34.019 Test: blockdev write read block ...passed 00:12:34.019 Test: blockdev write zeroes read block ...passed 00:12:34.019 Test: blockdev write zeroes read no split ...passed 00:12:34.019 Test: blockdev write zeroes read split ...passed 00:12:34.019 Test: blockdev write zeroes read split partial ...passed 00:12:34.019 Test: blockdev reset ...passed 00:12:34.019 Test: blockdev write read 8 blocks ...passed 00:12:34.019 Test: blockdev write read size > 128k ...passed 00:12:34.019 Test: blockdev write read invalid size ...passed 00:12:34.019 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.019 Test: blockdev write read max offset ...passed 00:12:34.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.019 Test: blockdev writev readv 8 blocks ...passed 00:12:34.019 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.019 Test: blockdev writev readv block ...passed 00:12:34.019 Test: blockdev writev readv size > 128k ...passed 00:12:34.019 Test: blockdev writev readv size > 
128k in two iovs ...passed 00:12:34.019 Test: blockdev comparev and writev ...passed 00:12:34.019 Test: blockdev nvme passthru rw ...passed 00:12:34.019 Test: blockdev nvme passthru vendor specific ...passed 00:12:34.019 Test: blockdev nvme admin passthru ...passed 00:12:34.019 Test: blockdev copy ...passed 00:12:34.019 00:12:34.019 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.019 suites 6 6 n/a 0 0 00:12:34.019 tests 138 138 138 0 0 00:12:34.019 asserts 780 780 780 0 n/a 00:12:34.019 00:12:34.019 Elapsed time = 1.055 seconds 00:12:34.019 0 00:12:34.019 14:07:36 -- bdev/blockdev.sh@293 -- # killprocess 67804 00:12:34.019 14:07:36 -- common/autotest_common.sh@936 -- # '[' -z 67804 ']' 00:12:34.019 14:07:36 -- common/autotest_common.sh@940 -- # kill -0 67804 00:12:34.019 14:07:36 -- common/autotest_common.sh@941 -- # uname 00:12:34.019 14:07:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.019 14:07:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67804 00:12:34.281 killing process with pid 67804 00:12:34.281 14:07:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.281 14:07:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.281 14:07:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67804' 00:12:34.281 14:07:36 -- common/autotest_common.sh@955 -- # kill 67804 00:12:34.281 14:07:36 -- common/autotest_common.sh@960 -- # wait 67804 00:12:34.854 14:07:37 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:34.854 00:12:34.854 real 0m2.036s 00:12:34.854 user 0m4.723s 00:12:34.854 sys 0m0.269s 00:12:34.854 14:07:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:34.854 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:12:34.854 ************************************ 00:12:34.854 END TEST bdev_bounds 00:12:34.854 ************************************ 00:12:34.854 14:07:37 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:34.854 14:07:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:34.854 14:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.854 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:12:34.854 ************************************ 00:12:34.854 START TEST bdev_nbd 00:12:34.854 ************************************ 00:12:34.854 14:07:37 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:34.854 14:07:37 -- bdev/blockdev.sh@298 -- # uname -s 00:12:34.854 14:07:37 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:34.854 14:07:37 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.854 14:07:37 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:34.854 14:07:37 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:34.854 14:07:37 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:34.854 14:07:37 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:34.854 14:07:37 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:34.854 14:07:37 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' 
'/dev/nbd9') 00:12:34.854 14:07:37 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:34.854 14:07:37 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:34.854 14:07:37 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.854 14:07:37 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:34.854 14:07:37 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:34.854 14:07:37 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:34.854 14:07:37 -- bdev/blockdev.sh@316 -- # nbd_pid=67858 00:12:34.854 14:07:37 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:34.854 14:07:37 -- bdev/blockdev.sh@318 -- # waitforlisten 67858 /var/tmp/spdk-nbd.sock 00:12:34.854 14:07:37 -- common/autotest_common.sh@829 -- # '[' -z 67858 ']' 00:12:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:34.854 14:07:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:34.854 14:07:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.854 14:07:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:34.854 14:07:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.854 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:12:34.854 14:07:37 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:34.854 [2024-12-08 14:07:37.739759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:34.854 [2024-12-08 14:07:37.740271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.116 [2024-12-08 14:07:37.893135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.378 [2024-12-08 14:07:38.163823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.767 14:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.767 14:07:39 -- common/autotest_common.sh@862 -- # return 0 00:12:36.767 14:07:39 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@24 -- # local i 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:36.767 14:07:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:36.767 14:07:39 -- common/autotest_common.sh@867 -- # local i 00:12:36.767 14:07:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.767 14:07:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.767 14:07:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:36.767 14:07:39 -- common/autotest_common.sh@871 -- # break 00:12:36.767 14:07:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:36.767 14:07:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:36.767 14:07:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.767 1+0 records in 00:12:36.767 1+0 records out 00:12:36.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116105 s, 3.5 MB/s 00:12:36.767 14:07:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.767 14:07:39 -- common/autotest_common.sh@884 -- # size=4096 00:12:36.767 14:07:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.767 14:07:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.767 14:07:39 -- common/autotest_common.sh@887 -- # return 0 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.767 14:07:39 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:36.767 14:07:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:37.029 14:07:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:37.029 14:07:39 -- common/autotest_common.sh@867 -- # local i 00:12:37.029 14:07:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:37.029 14:07:39 -- common/autotest_common.sh@871 -- # break 00:12:37.029 14:07:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.029 1+0 records in 00:12:37.029 1+0 records out 00:12:37.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108222 s, 3.8 MB/s 00:12:37.029 14:07:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.029 14:07:39 -- common/autotest_common.sh@884 -- # size=4096 00:12:37.029 14:07:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.029 14:07:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.029 14:07:39 -- common/autotest_common.sh@887 -- # return 0 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:37.029 14:07:39 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:37.029 14:07:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:37.029 14:07:39 -- common/autotest_common.sh@867 -- # local i 00:12:37.029 14:07:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:37.029 14:07:39 -- common/autotest_common.sh@871 -- # break 00:12:37.029 14:07:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.029 14:07:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.029 1+0 records in 00:12:37.029 1+0 records out 00:12:37.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000982015 s, 4.2 MB/s 00:12:37.029 14:07:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.290 14:07:39 -- common/autotest_common.sh@884 -- # size=4096 00:12:37.290 14:07:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.290 14:07:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.290 14:07:39 -- common/autotest_common.sh@887 -- # return 0 
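Every nbd_start_disk above is followed by the same readiness check: wait for the new device to appear in /proc/partitions, then prove it actually serves I/O with one direct 4 KiB read. A sketch of the waitfornbd helper as reconstructed from this xtrace (only the loop counters and the dd/stat/rm steps are visible in the log; the sleep between retries is an assumption):

    # Sketch: wait for an NBD device to register, then verify one O_DIRECT read.
    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed retry pacing; not shown in the trace
        done
        # a single 1-block direct read must come back as a full 4096-byte file
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

Using iflag=direct bypasses the page cache, so the read exercises the NBD backend itself rather than a cached page; in the traces above each such read completes in roughly a millisecond at 3-4 MB/s, with one slower outlier on /dev/nbd4.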
00:12:37.290 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.290 14:07:39 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.290 14:07:39 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:37.290 14:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:37.290 14:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:37.290 14:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:37.290 14:07:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:37.290 14:07:40 -- common/autotest_common.sh@867 -- # local i 00:12:37.290 14:07:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.290 14:07:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.290 14:07:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:37.290 14:07:40 -- common/autotest_common.sh@871 -- # break 00:12:37.290 14:07:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.290 14:07:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.290 14:07:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.290 1+0 records in 00:12:37.290 1+0 records out 00:12:37.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102125 s, 4.0 MB/s 00:12:37.290 14:07:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.290 14:07:40 -- common/autotest_common.sh@884 -- # size=4096 00:12:37.290 14:07:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.290 14:07:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.290 14:07:40 -- common/autotest_common.sh@887 -- # return 0 00:12:37.290 14:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.290 14:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.290 14:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:37.551 14:07:40 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:37.551 14:07:40 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:37.551 14:07:40 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:37.551 14:07:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:37.551 14:07:40 -- common/autotest_common.sh@867 -- # local i 00:12:37.551 14:07:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.551 14:07:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.551 14:07:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:37.551 14:07:40 -- common/autotest_common.sh@871 -- # break 00:12:37.551 14:07:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.551 14:07:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.551 14:07:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.123 1+0 records in 00:12:38.123 1+0 records out 00:12:38.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.509653 s, 8.0 kB/s 00:12:38.123 14:07:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.123 14:07:40 -- common/autotest_common.sh@884 -- # size=4096 00:12:38.123 14:07:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.123 14:07:40 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:12:38.123 14:07:40 -- common/autotest_common.sh@887 -- # return 0 00:12:38.123 14:07:40 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.123 14:07:40 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.123 14:07:40 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:38.384 14:07:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:38.384 14:07:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:38.384 14:07:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:38.384 14:07:41 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:38.384 14:07:41 -- common/autotest_common.sh@867 -- # local i 00:12:38.384 14:07:41 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:38.384 14:07:41 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:38.384 14:07:41 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:38.384 14:07:41 -- common/autotest_common.sh@871 -- # break 00:12:38.384 14:07:41 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:38.384 14:07:41 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:38.384 14:07:41 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.384 1+0 records in 00:12:38.384 1+0 records out 00:12:38.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111307 s, 3.7 MB/s 00:12:38.384 14:07:41 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.384 14:07:41 -- common/autotest_common.sh@884 -- # size=4096 00:12:38.384 14:07:41 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.384 14:07:41 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:38.384 14:07:41 -- common/autotest_common.sh@887 -- # return 0 00:12:38.384 14:07:41 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.384 14:07:41 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.384 14:07:41 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd0", 00:12:38.646 "bdev_name": "nvme0n1" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd1", 00:12:38.646 "bdev_name": "nvme1n1" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd2", 00:12:38.646 "bdev_name": "nvme1n2" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd3", 00:12:38.646 "bdev_name": "nvme1n3" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd4", 00:12:38.646 "bdev_name": "nvme2n1" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd5", 00:12:38.646 "bdev_name": "nvme3n1" 00:12:38.646 } 00:12:38.646 ]' 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd0", 00:12:38.646 "bdev_name": "nvme0n1" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd1", 00:12:38.646 "bdev_name": "nvme1n1" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd2", 00:12:38.646 "bdev_name": "nvme1n2" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd3", 00:12:38.646 "bdev_name": "nvme1n3" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": 
"/dev/nbd4", 00:12:38.646 "bdev_name": "nvme2n1" 00:12:38.646 }, 00:12:38.646 { 00:12:38.646 "nbd_device": "/dev/nbd5", 00:12:38.646 "bdev_name": "nvme3n1" 00:12:38.646 } 00:12:38.646 ]' 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@51 -- # local i 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.646 14:07:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@41 -- # break 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.907 14:07:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@41 -- # break 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.169 14:07:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@41 -- # break 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.169 14:07:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:39.430 
14:07:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@41 -- # break 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.430 14:07:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@41 -- # break 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.690 14:07:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@41 -- # break 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:39.952 14:07:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.953 14:07:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@65 -- # true 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@65 -- # count=0 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@122 -- # count=0 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:40.213 14:07:42 -- bdev/nbd_common.sh@127 -- # return 0 00:12:40.214 14:07:42 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@12 -- # local i 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.214 14:07:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:40.474 /dev/nbd0 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:40.474 14:07:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:40.474 14:07:43 -- common/autotest_common.sh@867 -- # local i 00:12:40.474 14:07:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:40.474 14:07:43 -- common/autotest_common.sh@871 -- # break 00:12:40.474 14:07:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.474 1+0 records in 00:12:40.474 1+0 records out 00:12:40.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00199462 s, 2.1 MB/s 00:12:40.474 14:07:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.474 14:07:43 -- common/autotest_common.sh@884 -- # size=4096 00:12:40.474 14:07:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.474 14:07:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.474 14:07:43 -- common/autotest_common.sh@887 -- # return 0 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:40.474 /dev/nbd1 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:40.474 14:07:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:40.474 14:07:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:40.474 14:07:43 -- common/autotest_common.sh@867 -- # local i 00:12:40.474 14:07:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:40.474 14:07:43 -- common/autotest_common.sh@871 -- # break 
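Aside: the trace above is the waitfornbd helper from autotest_common.sh. It polls /proc/partitions for the new device name (up to 20 tries) and then issues one O_DIRECT block read, so the kernel-to-SPDK nbd path is proven to actually serve data, not merely to exist. A minimal standalone sketch of the same pattern — the sleep back-off and the /dev/null sink are assumptions for illustration; the real helper reads into a temp file and checks its size:

    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device shows up in /proc/partitions once the kernel registers it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls
        done
        # a single direct-I/O read confirms the device answers requests
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }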
00:12:40.474 14:07:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.474 14:07:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.734 1+0 records in 00:12:40.734 1+0 records out 00:12:40.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000948691 s, 4.3 MB/s 00:12:40.734 14:07:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.734 14:07:43 -- common/autotest_common.sh@884 -- # size=4096 00:12:40.734 14:07:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.734 14:07:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.734 14:07:43 -- common/autotest_common.sh@887 -- # return 0 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:40.734 /dev/nbd10 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:40.734 14:07:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:40.734 14:07:43 -- common/autotest_common.sh@867 -- # local i 00:12:40.734 14:07:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.734 14:07:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.734 14:07:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:40.734 14:07:43 -- common/autotest_common.sh@871 -- # break 00:12:40.734 14:07:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:40.734 14:07:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.734 14:07:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.734 1+0 records in 00:12:40.734 1+0 records out 00:12:40.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905798 s, 4.5 MB/s 00:12:40.734 14:07:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.734 14:07:43 -- common/autotest_common.sh@884 -- # size=4096 00:12:40.734 14:07:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.734 14:07:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.734 14:07:43 -- common/autotest_common.sh@887 -- # return 0 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.734 14:07:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:40.995 /dev/nbd11 00:12:40.995 14:07:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:40.995 14:07:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:40.995 14:07:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:40.995 14:07:43 -- common/autotest_common.sh@867 -- # local i 00:12:40.995 14:07:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.995 14:07:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.995 14:07:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:40.995 14:07:43 -- 
common/autotest_common.sh@871 -- # break 00:12:40.995 14:07:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:40.995 14:07:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.995 14:07:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.995 1+0 records in 00:12:40.995 1+0 records out 00:12:40.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100105 s, 4.1 MB/s 00:12:40.995 14:07:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.995 14:07:43 -- common/autotest_common.sh@884 -- # size=4096 00:12:40.995 14:07:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.995 14:07:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.995 14:07:43 -- common/autotest_common.sh@887 -- # return 0 00:12:40.995 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.995 14:07:43 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.995 14:07:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:41.256 /dev/nbd12 00:12:41.256 14:07:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:41.256 14:07:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:41.256 14:07:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:41.256 14:07:44 -- common/autotest_common.sh@867 -- # local i 00:12:41.256 14:07:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:41.256 14:07:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:41.256 14:07:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:41.256 14:07:44 -- common/autotest_common.sh@871 -- # break 00:12:41.256 14:07:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:41.256 14:07:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:41.256 14:07:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.256 1+0 records in 00:12:41.256 1+0 records out 00:12:41.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011028 s, 3.7 MB/s 00:12:41.256 14:07:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.256 14:07:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:41.256 14:07:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.256 14:07:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:41.256 14:07:44 -- common/autotest_common.sh@887 -- # return 0 00:12:41.256 14:07:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.256 14:07:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.256 14:07:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:41.516 /dev/nbd13 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:41.516 14:07:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:41.516 14:07:44 -- common/autotest_common.sh@867 -- # local i 00:12:41.516 14:07:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:41.516 14:07:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:41.516 14:07:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:41.516 
14:07:44 -- common/autotest_common.sh@871 -- # break 00:12:41.516 14:07:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:41.516 14:07:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:41.516 14:07:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.516 1+0 records in 00:12:41.516 1+0 records out 00:12:41.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000988961 s, 4.1 MB/s 00:12:41.516 14:07:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.516 14:07:44 -- common/autotest_common.sh@884 -- # size=4096 00:12:41.516 14:07:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.516 14:07:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:41.516 14:07:44 -- common/autotest_common.sh@887 -- # return 0 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.516 14:07:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:41.776 14:07:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:41.776 { 00:12:41.776 "nbd_device": "/dev/nbd0", 00:12:41.777 "bdev_name": "nvme0n1" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd1", 00:12:41.777 "bdev_name": "nvme1n1" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd10", 00:12:41.777 "bdev_name": "nvme1n2" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd11", 00:12:41.777 "bdev_name": "nvme1n3" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd12", 00:12:41.777 "bdev_name": "nvme2n1" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd13", 00:12:41.777 "bdev_name": "nvme3n1" 00:12:41.777 } 00:12:41.777 ]' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd0", 00:12:41.777 "bdev_name": "nvme0n1" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd1", 00:12:41.777 "bdev_name": "nvme1n1" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd10", 00:12:41.777 "bdev_name": "nvme1n2" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd11", 00:12:41.777 "bdev_name": "nvme1n3" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd12", 00:12:41.777 "bdev_name": "nvme2n1" 00:12:41.777 }, 00:12:41.777 { 00:12:41.777 "nbd_device": "/dev/nbd13", 00:12:41.777 "bdev_name": "nvme3n1" 00:12:41.777 } 00:12:41.777 ]' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:41.777 /dev/nbd1 00:12:41.777 /dev/nbd10 00:12:41.777 /dev/nbd11 00:12:41.777 /dev/nbd12 00:12:41.777 /dev/nbd13' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:41.777 /dev/nbd1 00:12:41.777 /dev/nbd10 00:12:41.777 /dev/nbd11 00:12:41.777 /dev/nbd12 00:12:41.777 /dev/nbd13' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@65 -- # count=6 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@95 -- # count=6 
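Aside: with all six devices mapped and counted, the next phase in the trace is nbd_dd_data_verify — a 1 MiB random pattern is written through each nbd device with O_DIRECT, then compared back byte-for-byte with cmp. A condensed sketch of that write/verify pass, using the same dd and cmp invocations visible in the trace (the temp-file path is illustrative):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # raw write, no page cache
        cmp -b -n 1M "$tmp" "$dev"                              # byte-wise readback check
    done
    rm "$tmp"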
00:12:41.777 14:07:44 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:41.777 256+0 records in 00:12:41.777 256+0 records out 00:12:41.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994119 s, 105 MB/s 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:41.777 14:07:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:42.038 256+0 records in 00:12:42.038 256+0 records out 00:12:42.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238781 s, 4.4 MB/s 00:12:42.038 14:07:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.038 14:07:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:42.298 256+0 records in 00:12:42.298 256+0 records out 00:12:42.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252645 s, 4.2 MB/s 00:12:42.298 14:07:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.298 14:07:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:42.559 256+0 records in 00:12:42.559 256+0 records out 00:12:42.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201756 s, 5.2 MB/s 00:12:42.559 14:07:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.559 14:07:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:42.820 256+0 records in 00:12:42.820 256+0 records out 00:12:42.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245212 s, 4.3 MB/s 00:12:42.820 14:07:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.820 14:07:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:43.081 256+0 records in 00:12:43.081 256+0 records out 00:12:43.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.309954 s, 3.4 MB/s 00:12:43.081 14:07:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.081 14:07:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:43.343 256+0 records in 00:12:43.343 256+0 records out 00:12:43.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247465 s, 4.2 MB/s 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:43.343 14:07:46 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@51 -- # local i 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.343 14:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@41 -- # break 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.604 14:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.866 14:07:46 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@41 -- # break 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@41 -- # break 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.866 14:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@41 -- # break 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.127 14:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@41 -- # break 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.388 14:07:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:44.649 14:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:44.649 14:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:44.649 14:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:44.649 14:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.649 14:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.650 14:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:44.650 14:07:47 -- bdev/nbd_common.sh@41 -- # break 00:12:44.650 14:07:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.650 14:07:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:44.650 14:07:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.650 14:07:47 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@65 -- # true 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@65 -- # count=0 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@104 -- # count=0 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:44.910 14:07:47 -- bdev/nbd_common.sh@109 -- # return 0 00:12:44.910 14:07:47 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:44.911 14:07:47 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.911 14:07:47 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:44.911 14:07:47 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:44.911 14:07:47 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:44.911 14:07:47 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:44.911 malloc_lvol_verify 00:12:45.172 14:07:47 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:45.172 9cbd2d3f-a532-4052-9b86-e0d342addf4c 00:12:45.172 14:07:48 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:45.433 a2300e7d-7ed7-4d0f-b6e3-491bb0e6d93c 00:12:45.433 14:07:48 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:45.696 /dev/nbd0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:45.696 mke2fs 1.47.0 (5-Feb-2023) 00:12:45.696 Discarding device blocks: 0/4096 done 00:12:45.696 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:45.696 00:12:45.696 Allocating group tables: 0/1 done 00:12:45.696 Writing inode tables: 0/1 done 00:12:45.696 Creating journal (1024 blocks): done 00:12:45.696 Writing superblocks and filesystem accounting information: 0/1 done 00:12:45.696 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@51 -- # local i 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@41 -- # break 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:45.696 14:07:48 -- bdev/nbd_common.sh@147 -- # return 0 00:12:45.696 14:07:48 -- bdev/blockdev.sh@324 -- # killprocess 67858 00:12:45.696 14:07:48 -- common/autotest_common.sh@936 -- # '[' -z 67858 ']' 00:12:45.696 14:07:48 -- common/autotest_common.sh@940 -- # kill -0 67858 00:12:45.696 14:07:48 -- common/autotest_common.sh@941 -- # uname 00:12:45.696 14:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.696 14:07:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67858 00:12:45.958 14:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:45.958 14:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:45.958 killing process with pid 67858 00:12:45.958 14:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67858' 00:12:45.958 14:07:48 -- common/autotest_common.sh@955 -- # kill 67858 00:12:45.958 14:07:48 -- common/autotest_common.sh@960 -- # wait 67858 00:12:46.906 14:07:49 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:46.906 00:12:46.906 real 0m11.903s 00:12:46.906 user 0m15.168s 00:12:46.906 sys 0m3.936s 00:12:46.906 ************************************ 00:12:46.906 END TEST bdev_nbd 00:12:46.906 ************************************ 00:12:46.906 14:07:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:46.906 14:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:46.906 14:07:49 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:46.906 14:07:49 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.906 14:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:46.906 ************************************ 00:12:46.906 START TEST bdev_fio 00:12:46.906 ************************************ 00:12:46.906 14:07:49 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:46.906 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:46.906 14:07:49 -- bdev/blockdev.sh@329 -- # local env_context 00:12:46.906 14:07:49 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:46.906 14:07:49 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:46.906 14:07:49 -- bdev/blockdev.sh@337 -- # echo '' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:46.906 14:07:49 -- bdev/blockdev.sh@337 -- # env_context= 00:12:46.906 14:07:49 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:46.906 14:07:49 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:46.906 14:07:49 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:46.906 
14:07:49 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:46.906 14:07:49 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:46.906 14:07:49 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:46.906 14:07:49 -- common/autotest_common.sh@1290 -- # cat 00:12:46.906 14:07:49 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1303 -- # cat 00:12:46.906 14:07:49 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:46.906 14:07:49 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:46.906 14:07:49 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:46.906 14:07:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:46.906 14:07:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:46.906 14:07:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:46.906 14:07:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:46.906 14:07:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:46.906 14:07:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:46.906 14:07:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:46.906 14:07:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:46.906 14:07:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:46.906 14:07:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:46.906 14:07:49 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:46.906 14:07:49 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:46.906 14:07:49 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:46.906 14:07:49 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.906 14:07:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.906 14:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:46.906 ************************************ 00:12:46.906 START TEST bdev_fio_rw_verify 00:12:46.906 ************************************ 00:12:46.906 14:07:49 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.906 14:07:49 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:46.906 14:07:49 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:46.906 14:07:49 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.906 14:07:49 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:46.906 14:07:49 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.906 14:07:49 -- common/autotest_common.sh@1330 -- # shift 00:12:46.906 14:07:49 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:46.906 14:07:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.906 14:07:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.906 14:07:49 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:46.906 14:07:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:46.906 14:07:49 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.906 14:07:49 -- common/autotest_common.sh@1336 -- # break 00:12:46.906 14:07:49 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:46.906 14:07:49 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:47.168 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:47.168 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:47.168 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:47.168 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:47.168 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:47.168 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:47.168 fio-3.35 00:12:47.168 Starting 6 threads 00:12:59.411 00:12:59.411 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68281: Sun Dec 8 14:08:00 2024 00:12:59.411 read: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(503MiB/10003msec) 00:12:59.411 slat (usec): min=2, max=2525, avg= 7.15, stdev=15.59 00:12:59.411 clat (usec): min=88, max=1148.3k, avg=1583.38, stdev=9063.50 00:12:59.411 lat (usec): min=91, max=1148.3k, avg=1590.52, stdev=9063.58 00:12:59.411 clat percentiles (usec): 00:12:59.411 | 50.000th=[ 1418], 99.000th=[ 3916], 99.900th=[ 5407], 00:12:59.411 
| 99.990th=[ 8160], 99.999th=[1149240] 00:12:59.411 write: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(517MiB/10003msec); 0 zone resets 00:12:59.411 slat (usec): min=12, max=3976, avg=41.51, stdev=137.75 00:12:59.411 clat (usec): min=114, max=6824, avg=1752.51, stdev=831.43 00:12:59.411 lat (usec): min=131, max=7345, avg=1794.03, stdev=843.68 00:12:59.411 clat percentiles (usec): 00:12:59.411 | 50.000th=[ 1631], 99.000th=[ 4293], 99.900th=[ 5669], 99.990th=[ 6390], 00:12:59.411 | 99.999th=[ 6783] 00:12:59.411 bw ( KiB/s): min=47957, max=67432, per=100.00%, avg=53913.25, stdev=954.28, samples=112 00:12:59.411 iops : min=11987, max=16858, avg=13477.12, stdev=238.63, samples=112 00:12:59.411 lat (usec) : 100=0.01%, 250=0.80%, 500=4.00%, 750=6.71%, 1000=9.79% 00:12:59.411 lat (msec) : 2=51.67%, 4=25.76%, 10=1.27%, 2000=0.01% 00:12:59.411 cpu : usr=49.53%, sys=29.05%, ctx=5090, majf=0, minf=15409 00:12:59.411 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.411 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.411 issued rwts: total=128889,132291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.411 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:59.411 00:12:59.411 Run status group 0 (all jobs): 00:12:59.411 READ: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=503MiB (528MB), run=10003-10003msec 00:12:59.411 WRITE: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=517MiB (542MB), run=10003-10003msec 00:12:59.411 ----------------------------------------------------- 00:12:59.411 Suppressions used: 00:12:59.411 count bytes template 00:12:59.411 6 48 /usr/src/fio/parse.c 00:12:59.411 3329 319584 /usr/src/fio/iolog.c 00:12:59.411 1 8 libtcmalloc_minimal.so 00:12:59.411 1 904 libcrypto.so 00:12:59.411 ----------------------------------------------------- 00:12:59.411 00:12:59.411 00:12:59.411 real 0m12.038s 00:12:59.411 user 0m31.364s 00:12:59.411 sys 0m17.849s 00:12:59.411 14:08:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.411 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:12:59.411 ************************************ 00:12:59.411 END TEST bdev_fio_rw_verify 00:12:59.411 ************************************ 00:12:59.411 14:08:01 -- bdev/blockdev.sh@348 -- # rm -f 00:12:59.411 14:08:01 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.411 14:08:01 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:59.411 14:08:01 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.411 14:08:01 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:59.411 14:08:01 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:59.411 14:08:01 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:59.411 14:08:01 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:59.411 14:08:01 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:59.411 14:08:01 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:59.411 14:08:01 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:59.411 14:08:01 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.411 14:08:01 -- common/autotest_common.sh@1290 -- # 
cat 00:12:59.411 14:08:01 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:59.411 14:08:01 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:59.411 14:08:01 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:59.411 14:08:01 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:59.412 14:08:01 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0afd7646-4c37-4f76-a355-7e4dd0f9ab39"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0afd7646-4c37-4f76-a355-7e4dd0f9ab39",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b3332101-3b63-450b-a911-45ff6b54954e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3332101-3b63-450b-a911-45ff6b54954e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "0c8f58b8-8804-4edd-ac58-0989516f5308"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c8f58b8-8804-4edd-ac58-0989516f5308",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "f00df6f5-6034-46c6-84bd-fd533749958a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f00df6f5-6034-46c6-84bd-fd533749958a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9e657d4e-a787-4064-ac56-3d457e8ea88d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9e657d4e-a787-4064-ac56-3d457e8ea88d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' 
},' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e374070a-08af-4efa-9ab7-2b93048908fb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e374070a-08af-4efa-9ab7-2b93048908fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:59.412 14:08:01 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:59.412 14:08:01 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.412 /home/vagrant/spdk_repo/spdk 00:12:59.412 14:08:01 -- bdev/blockdev.sh@360 -- # popd 00:12:59.412 14:08:01 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:59.412 14:08:01 -- bdev/blockdev.sh@362 -- # return 0 00:12:59.412 00:12:59.412 real 0m12.213s 00:12:59.412 user 0m31.442s 00:12:59.412 sys 0m17.924s 00:12:59.412 14:08:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.412 ************************************ 00:12:59.412 END TEST bdev_fio 00:12:59.412 ************************************ 00:12:59.412 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 14:08:01 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:59.412 14:08:01 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:59.412 14:08:01 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:59.412 14:08:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.412 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 ************************************ 00:12:59.412 START TEST bdev_verify 00:12:59.412 ************************************ 00:12:59.412 14:08:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:59.412 [2024-12-08 14:08:01.989186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.412 [2024-12-08 14:08:01.989336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68455 ] 00:12:59.412 [2024-12-08 14:08:02.143316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:59.673 [2024-12-08 14:08:02.414268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.673 [2024-12-08 14:08:02.414354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.245 Running I/O for 5 seconds... 
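Aside on the bdevperf invocation driving this run (the flag readings are our gloss of the command line captured above, not SPDK documentation): -q 128 sets the queue depth per job, -o 4096 the I/O size in bytes, -w verify selects a write-then-read-back-and-compare workload, -t 5 the run time in seconds, and -m 0x3 a two-core reactor mask — which is why two reactors start on cores 0 and 1 above. Reproduced in one place, with paths as they appear in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3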
00:13:05.638 00:13:05.638 Latency(us) 00:13:05.638 [2024-12-08T14:08:08.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x0 length 0x20000 00:13:05.638 nvme0n1 : 5.09 2068.57 8.08 0.00 0.00 61582.35 16131.94 73400.32 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x20000 length 0x20000 00:13:05.638 nvme0n1 : 5.10 2079.16 8.12 0.00 0.00 60826.59 5671.38 81062.99 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x0 length 0x80000 00:13:05.638 nvme1n1 : 5.08 1914.23 7.48 0.00 0.00 66496.11 5721.80 98404.82 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x80000 length 0x80000 00:13:05.638 nvme1n1 : 5.08 2029.92 7.93 0.00 0.00 62696.94 8065.97 81466.29 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x0 length 0x80000 00:13:05.638 nvme1n2 : 5.07 2099.02 8.20 0.00 0.00 60621.29 5318.50 98404.82 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x80000 length 0x80000 00:13:05.638 nvme1n2 : 5.09 2010.46 7.85 0.00 0.00 63344.64 14115.45 90338.86 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x0 length 0x80000 00:13:05.638 nvme1n3 : 5.08 1919.76 7.50 0.00 0.00 66079.12 13611.32 90338.86 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x80000 length 0x80000 00:13:05.638 nvme1n3 : 5.09 2026.63 7.92 0.00 0.00 62752.56 4814.38 81869.59 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x0 length 0xbd0bd 00:13:05.638 nvme2n1 : 5.09 2030.97 7.93 0.00 0.00 62578.32 13913.80 82272.89 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:05.638 nvme2n1 : 5.10 2038.22 7.96 0.00 0.00 62220.05 5545.35 79046.50 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0x0 length 0xa0000 00:13:05.638 nvme3n1 : 5.09 2131.30 8.33 0.00 0.00 59481.99 9477.51 90742.15 00:13:05.638 [2024-12-08T14:08:08.558Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:05.638 Verification LBA range: start 0xa0000 length 0xa0000 00:13:05.638 nvme3n1 : 5.09 2131.09 8.32 0.00 0.00 59407.84 6704.84 87112.47 00:13:05.638 [2024-12-08T14:08:08.558Z] =================================================================================================================== 00:13:05.638 [2024-12-08T14:08:08.558Z] Total : 24479.33 95.62 0.00 0.00 62269.09 4814.38 98404.82 00:13:06.211 00:13:06.211 real 0m6.988s 00:13:06.211 user 0m8.899s 00:13:06.211 sys 
0m3.188s 00:13:06.211 14:08:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:06.211 ************************************ 00:13:06.211 14:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:06.211 END TEST bdev_verify 00:13:06.211 ************************************ 00:13:06.211 14:08:08 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:06.211 14:08:08 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:06.211 14:08:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.211 14:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:06.211 ************************************ 00:13:06.211 START TEST bdev_verify_big_io 00:13:06.211 ************************************ 00:13:06.211 14:08:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:06.211 [2024-12-08 14:08:09.027930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:06.211 [2024-12-08 14:08:09.028068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68556 ] 00:13:06.472 [2024-12-08 14:08:09.175357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:06.472 [2024-12-08 14:08:09.350412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.472 [2024-12-08 14:08:09.350480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.044 Running I/O for 5 seconds... 
00:13:13.645 00:13:13.645 Latency(us) 00:13:13.645 [2024-12-08T14:08:16.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x0 length 0x2000 00:13:13.645 nvme0n1 : 5.51 268.54 16.78 0.00 0.00 466519.31 108083.99 613013.66 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x2000 length 0x2000 00:13:13.645 nvme0n1 : 5.53 268.51 16.78 0.00 0.00 453609.40 64124.46 832408.02 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x0 length 0x8000 00:13:13.645 nvme1n1 : 5.52 252.22 15.76 0.00 0.00 492166.55 14317.10 567844.23 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x8000 length 0x8000 00:13:13.645 nvme1n1 : 5.53 267.31 16.71 0.00 0.00 454205.48 61301.37 642051.15 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x0 length 0x8000 00:13:13.645 nvme1n2 : 5.52 204.64 12.79 0.00 0.00 596087.27 107277.39 596881.72 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x8000 length 0x8000 00:13:13.645 nvme1n2 : 5.58 232.29 14.52 0.00 0.00 512795.08 50613.96 587202.56 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x0 length 0x8000 00:13:13.645 nvme1n3 : 5.52 236.27 14.77 0.00 0.00 508059.10 62511.26 664635.86 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x8000 length 0x8000 00:13:13.645 nvme1n3 : 5.59 249.93 15.62 0.00 0.00 469053.01 47790.87 561391.46 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x0 length 0xbd0b 00:13:13.645 nvme2n1 : 5.53 300.10 18.76 0.00 0.00 394460.23 6200.71 506542.87 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:13.645 nvme2n1 : 5.57 295.12 18.45 0.00 0.00 392014.73 21979.77 535580.36 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0x0 length 0xa000 00:13:13.645 nvme3n1 : 5.53 283.36 17.71 0.00 0.00 411121.32 5999.06 538806.74 00:13:13.645 [2024-12-08T14:08:16.565Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:13.645 Verification LBA range: start 0xa000 length 0xa000 00:13:13.645 nvme3n1 : 5.59 280.20 17.51 0.00 0.00 407733.77 1531.27 567844.23 00:13:13.645 [2024-12-08T14:08:16.565Z] =================================================================================================================== 00:13:13.645 [2024-12-08T14:08:16.565Z] Total : 3138.50 196.16 0.00 0.00 457404.13 1531.27 832408.02 00:13:13.645 00:13:13.645 real 0m7.449s 00:13:13.645 user 
0m13.332s 00:13:13.645 sys 0m0.556s 00:13:13.645 14:08:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.645 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:13:13.645 ************************************ 00:13:13.645 END TEST bdev_verify_big_io 00:13:13.645 ************************************ 00:13:13.645 14:08:16 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:13.645 14:08:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:13.645 14:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.645 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:13:13.645 ************************************ 00:13:13.645 START TEST bdev_write_zeroes 00:13:13.645 ************************************ 00:13:13.645 14:08:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:13.645 [2024-12-08 14:08:16.535831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:13.645 [2024-12-08 14:08:16.536377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68660 ] 00:13:13.906 [2024-12-08 14:08:16.679569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.167 [2024-12-08 14:08:16.853618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.428 Running I/O for 1 seconds... 00:13:15.372 00:13:15.372 Latency(us) 00:13:15.372 [2024-12-08T14:08:18.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.372 [2024-12-08T14:08:18.292Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.372 nvme0n1 : 1.01 12347.73 48.23 0.00 0.00 10356.50 8418.86 22786.36 00:13:15.372 [2024-12-08T14:08:18.293Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.373 nvme1n1 : 1.01 12332.67 48.17 0.00 0.00 10361.94 8469.27 21273.99 00:13:15.373 [2024-12-08T14:08:18.293Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.373 nvme1n2 : 1.01 12318.45 48.12 0.00 0.00 10364.93 8469.27 19761.62 00:13:15.373 [2024-12-08T14:08:18.293Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.373 nvme1n3 : 1.01 12304.45 48.06 0.00 0.00 10368.47 8519.68 18350.08 00:13:15.373 [2024-12-08T14:08:18.293Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.373 nvme2n1 : 1.02 13022.63 50.87 0.00 0.00 9789.82 3906.95 17845.96 00:13:15.373 [2024-12-08T14:08:18.293Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.373 nvme3n1 : 1.02 12316.21 48.11 0.00 0.00 10305.21 6805.66 24197.91 00:13:15.373 [2024-12-08T14:08:18.293Z] =================================================================================================================== 00:13:15.373 [2024-12-08T14:08:18.293Z] Total : 74642.14 291.57 0.00 0.00 10252.73 3906.95 24197.91 00:13:16.315 00:13:16.316 real 0m2.634s 00:13:16.316 user 0m2.042s 00:13:16.316 sys 0m0.419s 00:13:16.316 14:08:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 
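For reference, the three data-path checks above share one harness and differ only in workload knobs; a hedged side-by-side of the shapes used in this run (binary and bdev.json paths as traced above):

  # 4 KiB verify, 5 s, cores 0-1:
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
  # 64 KiB verify (the big-IO pass), 5 s, cores 0-1:
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
  # 4 KiB write_zeroes, 1 s, single core (0x1 in the EAL trace above):
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096  -w write_zeroes -t 1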
00:13:16.316 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:16.316 ************************************ 00:13:16.316 END TEST bdev_write_zeroes 00:13:16.316 ************************************ 00:13:16.316 14:08:19 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.316 14:08:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:16.316 14:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.316 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:16.316 ************************************ 00:13:16.316 START TEST bdev_json_nonenclosed 00:13:16.316 ************************************ 00:13:16.316 14:08:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.577 [2024-12-08 14:08:19.256354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:16.577 [2024-12-08 14:08:19.256484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68708 ] 00:13:16.577 [2024-12-08 14:08:19.409730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.837 [2024-12-08 14:08:19.621476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.837 [2024-12-08 14:08:19.621615] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:16.837 [2024-12-08 14:08:19.621638] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.098 00:13:17.099 real 0m0.742s 00:13:17.099 user 0m0.502s 00:13:17.099 sys 0m0.131s 00:13:17.099 14:08:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:17.099 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:17.099 ************************************ 00:13:17.099 END TEST bdev_json_nonenclosed 00:13:17.099 ************************************ 00:13:17.099 14:08:19 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:17.099 14:08:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:17.099 14:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.099 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:13:17.099 ************************************ 00:13:17.099 START TEST bdev_json_nonarray 00:13:17.099 ************************************ 00:13:17.099 14:08:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:17.360 [2024-12-08 14:08:20.076404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
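The two bdev_json_* cases here are negative tests of bdevperf's --json loader: the config must be a single JSON object whose "subsystems" key is an array, and each malformed file must make the app exit non-zero (hence the spdk_app_stop'd on non-zero warnings). A hedged sketch of the two failure shapes being fed in, using the commands traced in this run:

  # nonenclosed.json: valid JSON fragments, but not wrapped in { ... }
  #   -> "Invalid JSON configuration: not enclosed in {}."
  # nonarray.json: "subsystems" present, but not an array
  #   -> "Invalid JSON configuration: 'subsystems' should be an array."
  ./build/examples/bdevperf --json test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
  ./build/examples/bdevperf --json test/bdev/nonarray.json    -q 128 -o 4096 -w write_zeroes -t 1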
00:13:17.360 [2024-12-08 14:08:20.076541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68739 ] 00:13:17.360 [2024-12-08 14:08:20.230583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.622 [2024-12-08 14:08:20.463099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.622 [2024-12-08 14:08:20.463301] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:17.622 [2024-12-08 14:08:20.463321] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.884 00:13:17.884 real 0m0.769s 00:13:17.884 user 0m0.528s 00:13:17.884 sys 0m0.131s 00:13:17.884 14:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:17.884 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 ************************************ 00:13:17.884 END TEST bdev_json_nonarray 00:13:17.884 ************************************ 00:13:18.146 14:08:20 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:18.146 14:08:20 -- bdev/blockdev.sh@809 -- # cleanup 00:13:18.146 14:08:20 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:18.146 14:08:20 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:18.146 14:08:20 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:18.146 14:08:20 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:19.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:21.003 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.574 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.574 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.574 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.574 00:13:21.574 real 0m59.361s 00:13:21.574 user 1m27.665s 00:13:21.574 sys 0m33.681s 00:13:21.574 14:08:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:21.574 ************************************ 00:13:21.574 END TEST blockdev_xnvme 00:13:21.574 ************************************ 00:13:21.574 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:13:21.574 14:08:24 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:21.574 14:08:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:21.574 14:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.574 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:13:21.574 ************************************ 00:13:21.574 START TEST ublk 00:13:21.574 ************************************ 00:13:21.574 14:08:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:21.838 * Looking for test storage... 
00:13:21.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:21.839 14:08:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:21.839 14:08:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:21.839 14:08:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:21.839 14:08:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:21.839 14:08:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:21.839 14:08:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:21.839 14:08:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:21.839 14:08:24 -- scripts/common.sh@335 -- # IFS=.-: 00:13:21.839 14:08:24 -- scripts/common.sh@335 -- # read -ra ver1 00:13:21.839 14:08:24 -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.839 14:08:24 -- scripts/common.sh@336 -- # read -ra ver2 00:13:21.839 14:08:24 -- scripts/common.sh@337 -- # local 'op=<' 00:13:21.839 14:08:24 -- scripts/common.sh@339 -- # ver1_l=2 00:13:21.839 14:08:24 -- scripts/common.sh@340 -- # ver2_l=1 00:13:21.839 14:08:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:21.839 14:08:24 -- scripts/common.sh@343 -- # case "$op" in 00:13:21.839 14:08:24 -- scripts/common.sh@344 -- # : 1 00:13:21.839 14:08:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:21.839 14:08:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.839 14:08:24 -- scripts/common.sh@364 -- # decimal 1 00:13:21.839 14:08:24 -- scripts/common.sh@352 -- # local d=1 00:13:21.839 14:08:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.839 14:08:24 -- scripts/common.sh@354 -- # echo 1 00:13:21.839 14:08:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:21.839 14:08:24 -- scripts/common.sh@365 -- # decimal 2 00:13:21.839 14:08:24 -- scripts/common.sh@352 -- # local d=2 00:13:21.839 14:08:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.839 14:08:24 -- scripts/common.sh@354 -- # echo 2 00:13:21.839 14:08:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:21.839 14:08:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:21.839 14:08:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:21.839 14:08:24 -- scripts/common.sh@367 -- # return 0 00:13:21.839 14:08:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.839 14:08:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.839 --rc genhtml_branch_coverage=1 00:13:21.839 --rc genhtml_function_coverage=1 00:13:21.839 --rc genhtml_legend=1 00:13:21.839 --rc geninfo_all_blocks=1 00:13:21.839 --rc geninfo_unexecuted_blocks=1 00:13:21.839 00:13:21.839 ' 00:13:21.839 14:08:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.839 --rc genhtml_branch_coverage=1 00:13:21.839 --rc genhtml_function_coverage=1 00:13:21.839 --rc genhtml_legend=1 00:13:21.839 --rc geninfo_all_blocks=1 00:13:21.839 --rc geninfo_unexecuted_blocks=1 00:13:21.839 00:13:21.839 ' 00:13:21.839 14:08:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.839 --rc genhtml_branch_coverage=1 00:13:21.839 --rc genhtml_function_coverage=1 00:13:21.839 --rc genhtml_legend=1 00:13:21.839 --rc geninfo_all_blocks=1 00:13:21.839 --rc geninfo_unexecuted_blocks=1 00:13:21.839 00:13:21.839 ' 00:13:21.839 14:08:24 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:21.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.839 --rc genhtml_branch_coverage=1 00:13:21.839 --rc genhtml_function_coverage=1 00:13:21.839 --rc genhtml_legend=1 00:13:21.839 --rc geninfo_all_blocks=1 00:13:21.839 --rc geninfo_unexecuted_blocks=1 00:13:21.839 00:13:21.839 ' 00:13:21.839 14:08:24 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:21.839 14:08:24 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:21.839 14:08:24 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:21.839 14:08:24 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:21.839 14:08:24 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:21.839 14:08:24 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:21.839 14:08:24 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:21.839 14:08:24 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:21.839 14:08:24 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:21.839 14:08:24 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:21.839 14:08:24 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:21.839 14:08:24 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:21.839 14:08:24 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:21.839 14:08:24 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:21.839 14:08:24 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:21.839 14:08:24 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:21.839 14:08:24 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:21.839 14:08:24 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:21.839 14:08:24 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:21.839 14:08:24 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:21.839 14:08:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:21.839 14:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.839 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:13:21.839 ************************************ 00:13:21.839 START TEST test_save_ublk_config 00:13:21.839 ************************************ 00:13:21.839 14:08:24 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:21.839 14:08:24 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:21.839 14:08:24 -- ublk/ublk.sh@103 -- # tgtpid=69049 00:13:21.839 14:08:24 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:21.839 14:08:24 -- ublk/ublk.sh@106 -- # waitforlisten 69049 00:13:21.839 14:08:24 -- common/autotest_common.sh@829 -- # '[' -z 69049 ']' 00:13:21.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.839 14:08:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.839 14:08:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.839 14:08:24 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:21.839 14:08:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.839 14:08:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.839 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:13:21.839 [2024-12-08 14:08:24.750443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
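The test_save_config function now starting is a configuration round-trip: bring up spdk_tgt, build a ublk device over RPC, snapshot the live state with save_config, then boot a second spdk_tgt from that snapshot and check that /dev/ublkb0 comes back. A hedged sketch of the same flow driven by hand (rpc.py against the default /var/tmp/spdk.sock socket is assumed; the malloc sizing matches the config dumped below):

  ./build/bin/spdk_tgt -L ublk &                            # target #1
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 x 4 KiB blocks, per the dump
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # -> /dev/ublkb0
  ./scripts/rpc.py save_config > ublk.json                  # snapshot the whole runtime config
  # kill target #1, then replay the snapshot:
  ./build/bin/spdk_tgt -L ublk -c ublk.json                 # target #2, no further RPCs needed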
00:13:21.839 [2024-12-08 14:08:24.750767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69049 ] 00:13:22.100 [2024-12-08 14:08:24.901885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.360 [2024-12-08 14:08:25.180818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.360 [2024-12-08 14:08:25.181091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.746 14:08:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.746 14:08:26 -- common/autotest_common.sh@862 -- # return 0 00:13:23.746 14:08:26 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:23.746 14:08:26 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:23.746 14:08:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.746 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:23.746 [2024-12-08 14:08:26.282902] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:23.746 malloc0 00:13:23.746 [2024-12-08 14:08:26.362154] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:23.746 [2024-12-08 14:08:26.362275] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:23.746 [2024-12-08 14:08:26.362286] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:23.746 [2024-12-08 14:08:26.362297] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:23.746 [2024-12-08 14:08:26.371843] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:23.746 [2024-12-08 14:08:26.371886] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:23.746 [2024-12-08 14:08:26.379003] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:23.746 [2024-12-08 14:08:26.379152] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:23.746 [2024-12-08 14:08:26.396020] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:23.746 0 00:13:23.746 14:08:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.746 14:08:26 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:23.746 14:08:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.746 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:23.746 14:08:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.746 14:08:26 -- ublk/ublk.sh@115 -- # config='{ 00:13:23.746 "subsystems": [ 00:13:23.746 { 00:13:23.746 "subsystem": "iobuf", 00:13:23.746 "config": [ 00:13:23.746 { 00:13:23.746 "method": "iobuf_set_options", 00:13:23.746 "params": { 00:13:23.746 "small_pool_count": 8192, 00:13:23.746 "large_pool_count": 1024, 00:13:23.746 "small_bufsize": 8192, 00:13:23.746 "large_bufsize": 135168 00:13:23.746 } 00:13:23.746 } 00:13:23.746 ] 00:13:23.746 }, 00:13:23.746 { 00:13:23.746 "subsystem": "sock", 00:13:23.746 "config": [ 00:13:23.747 { 00:13:23.747 "method": "sock_impl_set_options", 00:13:23.747 "params": { 00:13:23.747 "impl_name": "posix", 00:13:23.747 "recv_buf_size": 2097152, 00:13:23.747 "send_buf_size": 2097152, 00:13:23.747 "enable_recv_pipe": true, 00:13:23.747 "enable_quickack": false, 00:13:23.747 "enable_placement_id": 0, 00:13:23.747 
"enable_zerocopy_send_server": true, 00:13:23.747 "enable_zerocopy_send_client": false, 00:13:23.747 "zerocopy_threshold": 0, 00:13:23.747 "tls_version": 0, 00:13:23.747 "enable_ktls": false 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "sock_impl_set_options", 00:13:23.747 "params": { 00:13:23.747 "impl_name": "ssl", 00:13:23.747 "recv_buf_size": 4096, 00:13:23.747 "send_buf_size": 4096, 00:13:23.747 "enable_recv_pipe": true, 00:13:23.747 "enable_quickack": false, 00:13:23.747 "enable_placement_id": 0, 00:13:23.747 "enable_zerocopy_send_server": true, 00:13:23.747 "enable_zerocopy_send_client": false, 00:13:23.747 "zerocopy_threshold": 0, 00:13:23.747 "tls_version": 0, 00:13:23.747 "enable_ktls": false 00:13:23.747 } 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "vmd", 00:13:23.747 "config": [] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "accel", 00:13:23.747 "config": [ 00:13:23.747 { 00:13:23.747 "method": "accel_set_options", 00:13:23.747 "params": { 00:13:23.747 "small_cache_size": 128, 00:13:23.747 "large_cache_size": 16, 00:13:23.747 "task_count": 2048, 00:13:23.747 "sequence_count": 2048, 00:13:23.747 "buf_count": 2048 00:13:23.747 } 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "bdev", 00:13:23.747 "config": [ 00:13:23.747 { 00:13:23.747 "method": "bdev_set_options", 00:13:23.747 "params": { 00:13:23.747 "bdev_io_pool_size": 65535, 00:13:23.747 "bdev_io_cache_size": 256, 00:13:23.747 "bdev_auto_examine": true, 00:13:23.747 "iobuf_small_cache_size": 128, 00:13:23.747 "iobuf_large_cache_size": 16 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "bdev_raid_set_options", 00:13:23.747 "params": { 00:13:23.747 "process_window_size_kb": 1024 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "bdev_iscsi_set_options", 00:13:23.747 "params": { 00:13:23.747 "timeout_sec": 30 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "bdev_nvme_set_options", 00:13:23.747 "params": { 00:13:23.747 "action_on_timeout": "none", 00:13:23.747 "timeout_us": 0, 00:13:23.747 "timeout_admin_us": 0, 00:13:23.747 "keep_alive_timeout_ms": 10000, 00:13:23.747 "transport_retry_count": 4, 00:13:23.747 "arbitration_burst": 0, 00:13:23.747 "low_priority_weight": 0, 00:13:23.747 "medium_priority_weight": 0, 00:13:23.747 "high_priority_weight": 0, 00:13:23.747 "nvme_adminq_poll_period_us": 10000, 00:13:23.747 "nvme_ioq_poll_period_us": 0, 00:13:23.747 "io_queue_requests": 0, 00:13:23.747 "delay_cmd_submit": true, 00:13:23.747 "bdev_retry_count": 3, 00:13:23.747 "transport_ack_timeout": 0, 00:13:23.747 "ctrlr_loss_timeout_sec": 0, 00:13:23.747 "reconnect_delay_sec": 0, 00:13:23.747 "fast_io_fail_timeout_sec": 0, 00:13:23.747 "generate_uuids": false, 00:13:23.747 "transport_tos": 0, 00:13:23.747 "io_path_stat": false, 00:13:23.747 "allow_accel_sequence": false 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "bdev_nvme_set_hotplug", 00:13:23.747 "params": { 00:13:23.747 "period_us": 100000, 00:13:23.747 "enable": false 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "bdev_malloc_create", 00:13:23.747 "params": { 00:13:23.747 "name": "malloc0", 00:13:23.747 "num_blocks": 8192, 00:13:23.747 "block_size": 4096, 00:13:23.747 "physical_block_size": 4096, 00:13:23.747 "uuid": "09519c92-4748-432d-9a28-0bcae14f90f1", 00:13:23.747 "optimal_io_boundary": 0 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 
"method": "bdev_wait_for_examine" 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "scsi", 00:13:23.747 "config": null 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "scheduler", 00:13:23.747 "config": [ 00:13:23.747 { 00:13:23.747 "method": "framework_set_scheduler", 00:13:23.747 "params": { 00:13:23.747 "name": "static" 00:13:23.747 } 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "vhost_scsi", 00:13:23.747 "config": [] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "vhost_blk", 00:13:23.747 "config": [] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "ublk", 00:13:23.747 "config": [ 00:13:23.747 { 00:13:23.747 "method": "ublk_create_target", 00:13:23.747 "params": { 00:13:23.747 "cpumask": "1" 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "ublk_start_disk", 00:13:23.747 "params": { 00:13:23.747 "bdev_name": "malloc0", 00:13:23.747 "ublk_id": 0, 00:13:23.747 "num_queues": 1, 00:13:23.747 "queue_depth": 128 00:13:23.747 } 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "nbd", 00:13:23.747 "config": [] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "nvmf", 00:13:23.747 "config": [ 00:13:23.747 { 00:13:23.747 "method": "nvmf_set_config", 00:13:23.747 "params": { 00:13:23.747 "discovery_filter": "match_any", 00:13:23.747 "admin_cmd_passthru": { 00:13:23.747 "identify_ctrlr": false 00:13:23.747 } 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "nvmf_set_max_subsystems", 00:13:23.747 "params": { 00:13:23.747 "max_subsystems": 1024 00:13:23.747 } 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "method": "nvmf_set_crdt", 00:13:23.747 "params": { 00:13:23.747 "crdt1": 0, 00:13:23.747 "crdt2": 0, 00:13:23.747 "crdt3": 0 00:13:23.747 } 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }, 00:13:23.747 { 00:13:23.747 "subsystem": "iscsi", 00:13:23.747 "config": [ 00:13:23.747 { 00:13:23.747 "method": "iscsi_set_options", 00:13:23.747 "params": { 00:13:23.747 "node_base": "iqn.2016-06.io.spdk", 00:13:23.747 "max_sessions": 128, 00:13:23.747 "max_connections_per_session": 2, 00:13:23.747 "max_queue_depth": 64, 00:13:23.747 "default_time2wait": 2, 00:13:23.747 "default_time2retain": 20, 00:13:23.747 "first_burst_length": 8192, 00:13:23.747 "immediate_data": true, 00:13:23.747 "allow_duplicated_isid": false, 00:13:23.747 "error_recovery_level": 0, 00:13:23.747 "nop_timeout": 60, 00:13:23.747 "nop_in_interval": 30, 00:13:23.747 "disable_chap": false, 00:13:23.747 "require_chap": false, 00:13:23.747 "mutual_chap": false, 00:13:23.747 "chap_group": 0, 00:13:23.747 "max_large_datain_per_connection": 64, 00:13:23.747 "max_r2t_per_connection": 4, 00:13:23.747 "pdu_pool_size": 36864, 00:13:23.747 "immediate_data_pool_size": 16384, 00:13:23.747 "data_out_pool_size": 2048 00:13:23.747 } 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 } 00:13:23.747 ] 00:13:23.747 }' 00:13:23.747 14:08:26 -- ublk/ublk.sh@116 -- # killprocess 69049 00:13:23.747 14:08:26 -- common/autotest_common.sh@936 -- # '[' -z 69049 ']' 00:13:23.747 14:08:26 -- common/autotest_common.sh@940 -- # kill -0 69049 00:13:23.747 14:08:26 -- common/autotest_common.sh@941 -- # uname 00:13:23.747 14:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.747 14:08:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69049 00:13:24.008 14:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:24.008 killing process with pid 
69049 00:13:24.008 14:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:24.008 14:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69049' 00:13:24.008 14:08:26 -- common/autotest_common.sh@955 -- # kill 69049 00:13:24.008 14:08:26 -- common/autotest_common.sh@960 -- # wait 69049 00:13:25.395 [2024-12-08 14:08:27.919130] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:25.395 [2024-12-08 14:08:27.956083] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:25.395 [2024-12-08 14:08:27.956186] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:25.395 [2024-12-08 14:08:27.965013] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:25.395 [2024-12-08 14:08:27.965062] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:25.395 [2024-12-08 14:08:27.965074] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:25.395 [2024-12-08 14:08:27.965099] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:25.395 [2024-12-08 14:08:27.965213] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:26.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.337 14:08:29 -- ublk/ublk.sh@119 -- # tgtpid=69111 00:13:26.337 14:08:29 -- ublk/ublk.sh@121 -- # waitforlisten 69111 00:13:26.337 14:08:29 -- common/autotest_common.sh@829 -- # '[' -z 69111 ']' 00:13:26.337 14:08:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.337 14:08:29 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:26.337 14:08:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.338 14:08:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
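The -c /dev/fd/63 in the spdk_tgt command line above is bash process substitution: the JSON captured by save_config (echoed in full below) is replayed as the new target's startup config, so the ublk target and disk are recreated without any further RPC calls. Equivalently, with the snapshot in a regular file (name assumed):

  ./build/bin/spdk_tgt -L ublk -c ublk.json    # ublk.json = the prior save_config output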
00:13:26.338 14:08:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.338 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:13:26.338 14:08:29 -- ublk/ublk.sh@118 -- # echo '{ 00:13:26.338 "subsystems": [ 00:13:26.338 { 00:13:26.338 "subsystem": "iobuf", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "iobuf_set_options", 00:13:26.338 "params": { 00:13:26.338 "small_pool_count": 8192, 00:13:26.338 "large_pool_count": 1024, 00:13:26.338 "small_bufsize": 8192, 00:13:26.338 "large_bufsize": 135168 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "sock", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "sock_impl_set_options", 00:13:26.338 "params": { 00:13:26.338 "impl_name": "posix", 00:13:26.338 "recv_buf_size": 2097152, 00:13:26.338 "send_buf_size": 2097152, 00:13:26.338 "enable_recv_pipe": true, 00:13:26.338 "enable_quickack": false, 00:13:26.338 "enable_placement_id": 0, 00:13:26.338 "enable_zerocopy_send_server": true, 00:13:26.338 "enable_zerocopy_send_client": false, 00:13:26.338 "zerocopy_threshold": 0, 00:13:26.338 "tls_version": 0, 00:13:26.338 "enable_ktls": false 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "sock_impl_set_options", 00:13:26.338 "params": { 00:13:26.338 "impl_name": "ssl", 00:13:26.338 "recv_buf_size": 4096, 00:13:26.338 "send_buf_size": 4096, 00:13:26.338 "enable_recv_pipe": true, 00:13:26.338 "enable_quickack": false, 00:13:26.338 "enable_placement_id": 0, 00:13:26.338 "enable_zerocopy_send_server": true, 00:13:26.338 "enable_zerocopy_send_client": false, 00:13:26.338 "zerocopy_threshold": 0, 00:13:26.338 "tls_version": 0, 00:13:26.338 "enable_ktls": false 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "vmd", 00:13:26.338 "config": [] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "accel", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "accel_set_options", 00:13:26.338 "params": { 00:13:26.338 "small_cache_size": 128, 00:13:26.338 "large_cache_size": 16, 00:13:26.338 "task_count": 2048, 00:13:26.338 "sequence_count": 2048, 00:13:26.338 "buf_count": 2048 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "bdev", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "bdev_set_options", 00:13:26.338 "params": { 00:13:26.338 "bdev_io_pool_size": 65535, 00:13:26.338 "bdev_io_cache_size": 256, 00:13:26.338 "bdev_auto_examine": true, 00:13:26.338 "iobuf_small_cache_size": 128, 00:13:26.338 "iobuf_large_cache_size": 16 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "bdev_raid_set_options", 00:13:26.338 "params": { 00:13:26.338 "process_window_size_kb": 1024 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "bdev_iscsi_set_options", 00:13:26.338 "params": { 00:13:26.338 "timeout_sec": 30 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "bdev_nvme_set_options", 00:13:26.338 "params": { 00:13:26.338 "action_on_timeout": "none", 00:13:26.338 "timeout_us": 0, 00:13:26.338 "timeout_admin_us": 0, 00:13:26.338 "keep_alive_timeout_ms": 10000, 00:13:26.338 "transport_retry_count": 4, 00:13:26.338 "arbitration_burst": 0, 00:13:26.338 "low_priority_weight": 0, 00:13:26.338 "medium_priority_weight": 0, 00:13:26.338 "high_priority_weight": 0, 00:13:26.338 "nvme_adminq_poll_period_us": 10000, 00:13:26.338 "nvme_ioq_poll_period_us": 0, 00:13:26.338 
"io_queue_requests": 0, 00:13:26.338 "delay_cmd_submit": true, 00:13:26.338 "bdev_retry_count": 3, 00:13:26.338 "transport_ack_timeout": 0, 00:13:26.338 "ctrlr_loss_timeout_sec": 0, 00:13:26.338 "reconnect_delay_sec": 0, 00:13:26.338 "fast_io_fail_timeout_sec": 0, 00:13:26.338 "generate_uuids": false, 00:13:26.338 "transport_tos": 0, 00:13:26.338 "io_path_stat": false, 00:13:26.338 "allow_accel_sequence": false 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "bdev_nvme_set_hotplug", 00:13:26.338 "params": { 00:13:26.338 "period_us": 100000, 00:13:26.338 "enable": false 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "bdev_malloc_create", 00:13:26.338 "params": { 00:13:26.338 "name": "malloc0", 00:13:26.338 "num_blocks": 8192, 00:13:26.338 "block_size": 4096, 00:13:26.338 "physical_block_size": 4096, 00:13:26.338 "uuid": "09519c92-4748-432d-9a28-0bcae14f90f1", 00:13:26.338 "optimal_io_boundary": 0 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "bdev_wait_for_examine" 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "scsi", 00:13:26.338 "config": null 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "scheduler", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "framework_set_scheduler", 00:13:26.338 "params": { 00:13:26.338 "name": "static" 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "vhost_scsi", 00:13:26.338 "config": [] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "vhost_blk", 00:13:26.338 "config": [] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "ublk", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "ublk_create_target", 00:13:26.338 "params": { 00:13:26.338 "cpumask": "1" 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "ublk_start_disk", 00:13:26.338 "params": { 00:13:26.338 "bdev_name": "malloc0", 00:13:26.338 "ublk_id": 0, 00:13:26.338 "num_queues": 1, 00:13:26.338 "queue_depth": 128 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "nbd", 00:13:26.338 "config": [] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "nvmf", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "nvmf_set_config", 00:13:26.338 "params": { 00:13:26.338 "discovery_filter": "match_any", 00:13:26.338 "admin_cmd_passthru": { 00:13:26.338 "identify_ctrlr": false 00:13:26.338 } 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "nvmf_set_max_subsystems", 00:13:26.338 "params": { 00:13:26.338 "max_subsystems": 1024 00:13:26.338 } 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "method": "nvmf_set_crdt", 00:13:26.338 "params": { 00:13:26.338 "crdt1": 0, 00:13:26.338 "crdt2": 0, 00:13:26.338 "crdt3": 0 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }, 00:13:26.338 { 00:13:26.338 "subsystem": "iscsi", 00:13:26.338 "config": [ 00:13:26.338 { 00:13:26.338 "method": "iscsi_set_options", 00:13:26.338 "params": { 00:13:26.338 "node_base": "iqn.2016-06.io.spdk", 00:13:26.338 "max_sessions": 128, 00:13:26.338 "max_connections_per_session": 2, 00:13:26.338 "max_queue_depth": 64, 00:13:26.338 "default_time2wait": 2, 00:13:26.338 "default_time2retain": 20, 00:13:26.338 "first_burst_length": 8192, 00:13:26.338 "immediate_data": true, 00:13:26.338 "allow_duplicated_isid": false, 00:13:26.338 "error_recovery_level": 0, 00:13:26.338 "nop_timeout": 60, 00:13:26.338 "nop_in_interval": 30, 00:13:26.338 
"disable_chap": false, 00:13:26.338 "require_chap": false, 00:13:26.338 "mutual_chap": false, 00:13:26.338 "chap_group": 0, 00:13:26.338 "max_large_datain_per_connection": 64, 00:13:26.338 "max_r2t_per_connection": 4, 00:13:26.338 "pdu_pool_size": 36864, 00:13:26.338 "immediate_data_pool_size": 16384, 00:13:26.338 "data_out_pool_size": 2048 00:13:26.338 } 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 } 00:13:26.338 ] 00:13:26.338 }' 00:13:26.600 [2024-12-08 14:08:29.288083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:26.600 [2024-12-08 14:08:29.288199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69111 ] 00:13:26.600 [2024-12-08 14:08:29.431270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.861 [2024-12-08 14:08:29.610862] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.861 [2024-12-08 14:08:29.611059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.433 [2024-12-08 14:08:30.262627] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:27.433 [2024-12-08 14:08:30.268083] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:27.433 [2024-12-08 14:08:30.268150] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:27.433 [2024-12-08 14:08:30.268156] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:27.433 [2024-12-08 14:08:30.268163] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:27.433 [2024-12-08 14:08:30.279070] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:27.433 [2024-12-08 14:08:30.279090] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:27.433 [2024-12-08 14:08:30.286005] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:27.433 [2024-12-08 14:08:30.286085] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:27.433 [2024-12-08 14:08:30.302999] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:28.005 14:08:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.005 14:08:30 -- common/autotest_common.sh@862 -- # return 0 00:13:28.005 14:08:30 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:28.005 14:08:30 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:28.005 14:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.005 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:28.005 14:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.005 14:08:30 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:28.005 14:08:30 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:28.005 14:08:30 -- ublk/ublk.sh@125 -- # killprocess 69111 00:13:28.005 14:08:30 -- common/autotest_common.sh@936 -- # '[' -z 69111 ']' 00:13:28.005 14:08:30 -- common/autotest_common.sh@940 -- # kill -0 69111 00:13:28.005 14:08:30 -- common/autotest_common.sh@941 -- # uname 00:13:28.005 14:08:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.005 14:08:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69111 00:13:28.005 
14:08:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:28.005 14:08:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:28.005 14:08:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69111' 00:13:28.005 killing process with pid 69111 00:13:28.005 14:08:30 -- common/autotest_common.sh@955 -- # kill 69111 00:13:28.005 14:08:30 -- common/autotest_common.sh@960 -- # wait 69111 00:13:29.446 [2024-12-08 14:08:31.996155] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:29.446 [2024-12-08 14:08:32.035020] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:29.446 [2024-12-08 14:08:32.035121] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:29.446 [2024-12-08 14:08:32.043018] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:29.446 [2024-12-08 14:08:32.043062] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:29.446 [2024-12-08 14:08:32.043069] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:29.446 [2024-12-08 14:08:32.043091] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:29.446 [2024-12-08 14:08:32.043211] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:30.387 14:08:33 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:30.387 ************************************ 00:13:30.387 END TEST test_save_ublk_config 00:13:30.387 ************************************ 00:13:30.387 00:13:30.387 real 0m8.621s 00:13:30.387 user 0m6.032s 00:13:30.387 sys 0m3.536s 00:13:30.387 14:08:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:30.387 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:30.648 14:08:33 -- ublk/ublk.sh@139 -- # spdk_pid=69191 00:13:30.648 14:08:33 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.648 14:08:33 -- ublk/ublk.sh@141 -- # waitforlisten 69191 00:13:30.648 14:08:33 -- common/autotest_common.sh@829 -- # '[' -z 69191 ']' 00:13:30.648 14:08:33 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:30.648 14:08:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.648 14:08:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.648 14:08:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.648 14:08:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.648 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:13:30.648 [2024-12-08 14:08:33.400743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
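The spdk_tgt coming up here (pid 69191, two cores via -m 0x3) hosts the remaining ublk cases. test_create_ublk, next up, walks the full device lifecycle over RPC; a hedged sketch of the sequence it exercises (rpc.py against the default socket assumed, parameters matching the traces below):

  ./scripts/rpc.py ublk_create_target                       # bring up the kernel-facing target
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MB backing bdev, 4 KiB blocks
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0
  ./scripts/rpc.py ublk_get_disks -n 0                      # sanity-check the device mapping
  # ... fio write/verify against /dev/ublkb0 (traced below) ...
  ./scripts/rpc.py ublk_stop_disk 0                         # tears down /dev/ublkb0
  ./scripts/rpc.py ublk_stop_disk 0 || true                 # repeat must fail: "No such device"
  ./scripts/rpc.py ublk_destroy_target
  ./scripts/rpc.py bdev_malloc_delete Malloc0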
00:13:30.649 [2024-12-08 14:08:33.400839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69191 ] 00:13:30.649 [2024-12-08 14:08:33.544695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.909 [2024-12-08 14:08:33.717870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:30.909 [2024-12-08 14:08:33.718273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.909 [2024-12-08 14:08:33.718369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.295 14:08:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.296 14:08:34 -- common/autotest_common.sh@862 -- # return 0 00:13:32.296 14:08:34 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:32.296 14:08:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:32.296 14:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:32.296 14:08:34 -- common/autotest_common.sh@10 -- # set +x 00:13:32.296 ************************************ 00:13:32.296 START TEST test_create_ublk 00:13:32.296 ************************************ 00:13:32.296 14:08:34 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:32.296 14:08:34 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:32.296 14:08:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.296 14:08:34 -- common/autotest_common.sh@10 -- # set +x 00:13:32.296 [2024-12-08 14:08:34.921703] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:32.296 14:08:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.296 14:08:34 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:32.296 14:08:34 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:32.296 14:08:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.296 14:08:34 -- common/autotest_common.sh@10 -- # set +x 00:13:32.296 14:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.296 14:08:35 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:32.296 14:08:35 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:32.296 14:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.296 14:08:35 -- common/autotest_common.sh@10 -- # set +x 00:13:32.296 [2024-12-08 14:08:35.096122] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:32.296 [2024-12-08 14:08:35.096453] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:32.296 [2024-12-08 14:08:35.096461] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:32.296 [2024-12-08 14:08:35.096468] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:32.296 [2024-12-08 14:08:35.105229] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:32.296 [2024-12-08 14:08:35.105257] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:32.296 [2024-12-08 14:08:35.112010] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:32.296 [2024-12-08 14:08:35.123172] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:32.296 [2024-12-08 14:08:35.147005] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:32.296 14:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.296 14:08:35 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:32.296 14:08:35 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:32.296 14:08:35 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:32.296 14:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.296 14:08:35 -- common/autotest_common.sh@10 -- # set +x 00:13:32.296 14:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.296 14:08:35 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:32.296 { 00:13:32.296 "ublk_device": "/dev/ublkb0", 00:13:32.296 "id": 0, 00:13:32.296 "queue_depth": 512, 00:13:32.296 "num_queues": 4, 00:13:32.296 "bdev_name": "Malloc0" 00:13:32.296 } 00:13:32.296 ]' 00:13:32.296 14:08:35 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:32.296 14:08:35 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:32.296 14:08:35 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:32.557 14:08:35 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:32.557 14:08:35 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:32.557 14:08:35 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:32.557 14:08:35 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:32.557 14:08:35 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:32.557 14:08:35 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:32.557 14:08:35 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:32.557 14:08:35 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:32.557 14:08:35 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:32.557 14:08:35 -- lvol/common.sh@41 -- # local offset=0 00:13:32.557 14:08:35 -- lvol/common.sh@42 -- # local size=134217728 00:13:32.557 14:08:35 -- lvol/common.sh@43 -- # local rw=write 00:13:32.557 14:08:35 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:32.557 14:08:35 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:32.557 14:08:35 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:32.557 14:08:35 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:32.557 14:08:35 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:32.557 14:08:35 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:32.557 14:08:35 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:32.557 fio: verification read phase will never start because write phase uses all of runtime 00:13:32.557 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:32.557 fio-3.35 00:13:32.557 Starting 1 process 00:13:44.792 00:13:44.792 fio_test: (groupid=0, jobs=1): err= 0: pid=69244: Sun Dec 8 14:08:45 2024 00:13:44.792 write: IOPS=13.6k, BW=53.3MiB/s (55.8MB/s)(533MiB/10001msec); 0 zone resets 00:13:44.792 clat (usec): min=45, max=8023, avg=72.64, stdev=134.09 00:13:44.792 lat (usec): min=45, max=8024, avg=73.04, stdev=134.10 00:13:44.792 clat percentiles (usec): 00:13:44.792 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:13:44.792 | 30.00th=[ 
63], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:13:44.792 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 78], 00:13:44.792 | 99.00th=[ 87], 99.50th=[ 96], 99.90th=[ 2966], 99.95th=[ 3523], 00:13:44.792 | 99.99th=[ 4047] 00:13:44.792 bw ( KiB/s): min=24968, max=59880, per=99.81%, avg=54432.42, stdev=7572.38, samples=19 00:13:44.792 iops : min= 6242, max=14970, avg=13608.11, stdev=1893.09, samples=19 00:13:44.792 lat (usec) : 50=0.01%, 100=99.55%, 250=0.17%, 500=0.02%, 750=0.01% 00:13:44.792 lat (usec) : 1000=0.02% 00:13:44.792 lat (msec) : 2=0.06%, 4=0.14%, 10=0.01% 00:13:44.792 cpu : usr=1.58%, sys=12.85%, ctx=136350, majf=0, minf=796 00:13:44.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.792 issued rwts: total=0,136349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:44.792 00:13:44.792 Run status group 0 (all jobs): 00:13:44.792 WRITE: bw=53.3MiB/s (55.8MB/s), 53.3MiB/s-53.3MiB/s (55.8MB/s-55.8MB/s), io=533MiB (558MB), run=10001-10001msec 00:13:44.792 00:13:44.792 Disk stats (read/write): 00:13:44.792 ublkb0: ios=0/134827, merge=0/0, ticks=0/8364, in_queue=8365, util=99.09% 00:13:44.792 14:08:45 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:44.792 14:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.792 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 [2024-12-08 14:08:45.563292] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:44.792 [2024-12-08 14:08:45.607042] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:44.792 [2024-12-08 14:08:45.607728] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:44.792 [2024-12-08 14:08:45.615027] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:44.792 [2024-12-08 14:08:45.615288] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:44.792 [2024-12-08 14:08:45.615304] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:44.792 14:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.792 14:08:45 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:44.792 14:08:45 -- common/autotest_common.sh@650 -- # local es=0 00:13:44.792 14:08:45 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:44.792 14:08:45 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:44.792 14:08:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.792 14:08:45 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:44.792 14:08:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.792 14:08:45 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:44.792 14:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.792 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 [2024-12-08 14:08:45.631086] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:44.792 request: 00:13:44.792 { 00:13:44.792 "ublk_id": 0, 00:13:44.792 "method": "ublk_stop_disk", 00:13:44.792 "req_id": 1 00:13:44.792 } 00:13:44.792 Got JSON-RPC error response 00:13:44.792 response: 00:13:44.792 { 00:13:44.792 "code": -19, 00:13:44.792 
"message": "No such device" 00:13:44.792 } 00:13:44.792 14:08:45 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:44.792 14:08:45 -- common/autotest_common.sh@653 -- # es=1 00:13:44.792 14:08:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.792 14:08:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:44.792 14:08:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.792 14:08:45 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:44.792 14:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.792 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 [2024-12-08 14:08:45.647053] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:44.792 [2024-12-08 14:08:45.654999] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:44.792 [2024-12-08 14:08:45.655027] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:44.792 14:08:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.792 14:08:45 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:44.792 14:08:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.792 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.792 14:08:46 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:44.792 14:08:46 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:44.792 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.792 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.792 14:08:46 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:44.792 14:08:46 -- lvol/common.sh@26 -- # jq length 00:13:44.792 14:08:46 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:44.792 14:08:46 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:44.792 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.792 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.792 14:08:46 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:44.792 14:08:46 -- lvol/common.sh@28 -- # jq length 00:13:44.792 14:08:46 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:44.793 00:13:44.793 real 0m11.211s 00:13:44.793 user 0m0.451s 00:13:44.793 sys 0m1.369s 00:13:44.793 14:08:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 ************************************ 00:13:44.793 END TEST test_create_ublk 00:13:44.793 ************************************ 00:13:44.793 14:08:46 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:44.793 14:08:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.793 14:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 ************************************ 00:13:44.793 START TEST test_create_multi_ublk 00:13:44.793 ************************************ 00:13:44.793 14:08:46 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:44.793 14:08:46 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 [2024-12-08 14:08:46.171629] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:44.793 14:08:46 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:44.793 14:08:46 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:44.793 14:08:46 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 [2024-12-08 14:08:46.410111] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:44.793 [2024-12-08 14:08:46.410436] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:44.793 [2024-12-08 14:08:46.410448] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:44.793 [2024-12-08 14:08:46.410455] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.793 [2024-12-08 14:08:46.430003] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.793 [2024-12-08 14:08:46.430027] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.793 [2024-12-08 14:08:46.442009] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.793 [2024-12-08 14:08:46.442536] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:44.793 [2024-12-08 14:08:46.469013] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:44.793 14:08:46 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:44.793 14:08:46 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 [2024-12-08 14:08:46.701098] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:44.793 [2024-12-08 14:08:46.701438] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:44.793 [2024-12-08 14:08:46.701454] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:44.793 [2024-12-08 14:08:46.701459] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.793 [2024-12-08 14:08:46.709019] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.793 [2024-12-08 14:08:46.709037] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.793 
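Each ublk device in this test is brought up by the same three-step kernel handshake visible above: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, all driven by one bdev-create RPC plus one disk-start RPC. As a minimal sketch, the rpc_cmd wrapper calls reduce to the following direct scripts/rpc.py invocations (the default /var/tmp/spdk.sock socket is an assumption; the commands and arguments are taken verbatim from this log):

    # one-time target setup, then a malloc bdev and a ublk disk per device (id 1 shown)
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b Malloc1 128 4096   # 128 MiB bdev, 4096-byte blocks
    scripts/rpc.py ublk_start_disk Malloc1 1 -q 4 -d 512    # 4 queues, queue depth 512
    # /dev/ublkb1 appears once UBLK_CMD_START_DEV completes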
[2024-12-08 14:08:46.717008] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.793 [2024-12-08 14:08:46.717541] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:44.793 [2024-12-08 14:08:46.734013] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:44.793 14:08:46 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:44.793 14:08:46 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 [2024-12-08 14:08:46.917115] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:44.793 [2024-12-08 14:08:46.917448] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:44.793 [2024-12-08 14:08:46.917461] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:44.793 [2024-12-08 14:08:46.917470] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.793 [2024-12-08 14:08:46.925018] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.793 [2024-12-08 14:08:46.925039] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.793 [2024-12-08 14:08:46.933009] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.793 [2024-12-08 14:08:46.933559] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:44.793 [2024-12-08 14:08:46.942019] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.793 14:08:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:46 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:44.793 14:08:46 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.793 14:08:46 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:44.793 14:08:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 14:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:47 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:44.793 14:08:47 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:44.793 14:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 [2024-12-08 14:08:47.117108] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:44.793 [2024-12-08 14:08:47.117433] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:44.793 [2024-12-08 14:08:47.117448] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:44.793 [2024-12-08 14:08:47.117453] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.793 [2024-12-08 14:08:47.125020] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.793 [2024-12-08 14:08:47.125037] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.793 [2024-12-08 14:08:47.133012] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.793 [2024-12-08 14:08:47.133551] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:44.793 [2024-12-08 14:08:47.142026] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.793 14:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:47 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:44.793 14:08:47 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:44.793 14:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.793 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 14:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.793 14:08:47 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:44.793 { 00:13:44.793 "ublk_device": "/dev/ublkb0", 00:13:44.793 "id": 0, 00:13:44.793 "queue_depth": 512, 00:13:44.793 "num_queues": 4, 00:13:44.793 "bdev_name": "Malloc0" 00:13:44.793 }, 00:13:44.793 { 00:13:44.793 "ublk_device": "/dev/ublkb1", 00:13:44.793 "id": 1, 00:13:44.793 "queue_depth": 512, 00:13:44.793 "num_queues": 4, 00:13:44.793 "bdev_name": "Malloc1" 00:13:44.793 }, 00:13:44.793 { 00:13:44.793 "ublk_device": "/dev/ublkb2", 00:13:44.793 "id": 2, 00:13:44.793 "queue_depth": 512, 00:13:44.793 "num_queues": 4, 00:13:44.793 "bdev_name": "Malloc2" 00:13:44.793 }, 00:13:44.793 { 00:13:44.793 "ublk_device": "/dev/ublkb3", 00:13:44.794 "id": 3, 00:13:44.794 "queue_depth": 512, 00:13:44.794 "num_queues": 4, 00:13:44.794 "bdev_name": "Malloc3" 00:13:44.794 } 00:13:44.794 ]' 00:13:44.794 14:08:47 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:44.794 14:08:47 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:44.794 14:08:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:44.794 14:08:47 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:44.794 14:08:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:44.794 14:08:47 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:44.794 14:08:47 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:44.794 14:08:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:44.794 14:08:47 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:44.794 14:08:47 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:44.794 14:08:47 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:44.794 14:08:47 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:45.052 14:08:47 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:45.052 14:08:47 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:45.052 14:08:47 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:45.052 14:08:47 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:45.052 14:08:47 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:45.052 14:08:47 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:45.052 14:08:47 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:45.052 14:08:47 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.052 14:08:47 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:45.052 14:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.052 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.052 [2024-12-08 14:08:47.797074] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.052 [2024-12-08 14:08:47.837048] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.052 [2024-12-08 14:08:47.837857] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.052 [2024-12-08 14:08:47.845012] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.052 [2024-12-08 14:08:47.845263] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:45.052 [2024-12-08 14:08:47.845278] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:45.052 14:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.052 14:08:47 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.052 14:08:47 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:45.052 14:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.052 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.052 [2024-12-08 14:08:47.861064] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.052 [2024-12-08 14:08:47.899045] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.052 [2024-12-08 14:08:47.899807] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.052 [2024-12-08 14:08:47.910029] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.052 [2024-12-08 14:08:47.910269] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:45.052 [2024-12-08 14:08:47.910282] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:45.052 14:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.052 14:08:47 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.052 14:08:47 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:45.052 14:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.052 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.052 [2024-12-08 14:08:47.925061] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.052 [2024-12-08 14:08:47.957033] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.052 [2024-12-08 14:08:47.957754] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.052 [2024-12-08 14:08:47.966031] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.052 [2024-12-08 14:08:47.966277] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:45.052 [2024-12-08 14:08:47.966287] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:45.309 14:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.309 14:08:47 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.309 14:08:47 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:45.309 14:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.309 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.309 [2024-12-08 14:08:47.981064] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.309 [2024-12-08 14:08:48.008513] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.309 [2024-12-08 14:08:48.009504] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.309 [2024-12-08 14:08:48.016015] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:45.310 [2024-12-08 14:08:48.016243] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:45.310 [2024-12-08 14:08:48.016256] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:45.310 14:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.310 14:08:48 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:45.310 [2024-12-08 14:08:48.200070] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:45.310 [2024-12-08 14:08:48.207999] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:45.310 [2024-12-08 14:08:48.208027] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:45.567 14:08:48 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:45.567 14:08:48 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.567 14:08:48 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:45.567 14:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.567 14:08:48 -- common/autotest_common.sh@10 -- # set +x 00:13:45.825 14:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.825 14:08:48 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:45.825 14:08:48 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:45.825 14:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.825 14:08:48 -- common/autotest_common.sh@10 -- # set +x 00:13:46.082 14:08:48 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.082 14:08:48 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:46.082 14:08:48 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:46.082 14:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.082 14:08:48 -- common/autotest_common.sh@10 -- # set +x 00:13:46.340 14:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.340 14:08:49 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:46.340 14:08:49 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:46.340 14:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.340 14:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:46.598 14:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.598 14:08:49 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:46.598 14:08:49 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:46.598 14:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.598 14:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:46.598 14:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.598 14:08:49 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:46.598 14:08:49 -- lvol/common.sh@26 -- # jq length 00:13:46.598 14:08:49 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:46.598 14:08:49 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:46.598 14:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.598 14:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:46.598 14:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.598 14:08:49 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:46.598 14:08:49 -- lvol/common.sh@28 -- # jq length 00:13:46.598 14:08:49 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:46.598 00:13:46.598 real 0m3.307s 00:13:46.598 user 0m0.802s 00:13:46.598 sys 0m0.132s 00:13:46.598 ************************************ 00:13:46.598 END TEST test_create_multi_ublk 00:13:46.598 ************************************ 00:13:46.598 14:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:46.598 14:08:49 -- common/autotest_common.sh@10 -- # set +x 00:13:46.598 14:08:49 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:46.598 14:08:49 -- ublk/ublk.sh@147 -- # cleanup 00:13:46.598 14:08:49 -- ublk/ublk.sh@130 -- # killprocess 69191 00:13:46.598 14:08:49 -- common/autotest_common.sh@936 -- # '[' -z 69191 ']' 00:13:46.598 14:08:49 -- common/autotest_common.sh@940 -- # kill -0 69191 00:13:46.599 14:08:49 -- common/autotest_common.sh@941 -- # uname 00:13:46.599 14:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.599 14:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69191 00:13:46.857 killing process with pid 69191 00:13:46.857 14:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:46.857 14:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:46.857 14:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69191' 00:13:46.857 14:08:49 -- common/autotest_common.sh@955 -- # kill 69191 00:13:46.857 14:08:49 -- common/autotest_common.sh@960 -- # wait 69191 00:13:47.423 [2024-12-08 14:08:50.091167] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:47.423 [2024-12-08 14:08:50.091375] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:47.996 00:13:47.996 real 0m26.316s 00:13:47.996 user 0m37.178s 00:13:47.996 sys 0m9.977s 00:13:47.996 14:08:50 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:13:47.996 ************************************ 00:13:47.996 END TEST ublk 00:13:47.996 ************************************ 00:13:47.996 14:08:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.996 14:08:50 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:47.996 14:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:47.996 14:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.996 14:08:50 -- common/autotest_common.sh@10 -- # set +x 00:13:47.996 ************************************ 00:13:47.996 START TEST ublk_recovery 00:13:47.996 ************************************ 00:13:47.996 14:08:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:48.259 * Looking for test storage... 00:13:48.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:48.259 14:08:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:48.259 14:08:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:48.259 14:08:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:48.259 14:08:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:48.259 14:08:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:48.259 14:08:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:48.259 14:08:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:48.259 14:08:50 -- scripts/common.sh@335 -- # IFS=.-: 00:13:48.259 14:08:50 -- scripts/common.sh@335 -- # read -ra ver1 00:13:48.259 14:08:50 -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.259 14:08:50 -- scripts/common.sh@336 -- # read -ra ver2 00:13:48.259 14:08:50 -- scripts/common.sh@337 -- # local 'op=<' 00:13:48.259 14:08:50 -- scripts/common.sh@339 -- # ver1_l=2 00:13:48.259 14:08:50 -- scripts/common.sh@340 -- # ver2_l=1 00:13:48.259 14:08:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:48.259 14:08:50 -- scripts/common.sh@343 -- # case "$op" in 00:13:48.259 14:08:50 -- scripts/common.sh@344 -- # : 1 00:13:48.259 14:08:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:48.259 14:08:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.259 14:08:50 -- scripts/common.sh@364 -- # decimal 1 00:13:48.259 14:08:50 -- scripts/common.sh@352 -- # local d=1 00:13:48.259 14:08:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.259 14:08:50 -- scripts/common.sh@354 -- # echo 1 00:13:48.259 14:08:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:48.259 14:08:50 -- scripts/common.sh@365 -- # decimal 2 00:13:48.259 14:08:51 -- scripts/common.sh@352 -- # local d=2 00:13:48.259 14:08:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.259 14:08:51 -- scripts/common.sh@354 -- # echo 2 00:13:48.259 14:08:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:48.259 14:08:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:48.259 14:08:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:48.259 14:08:51 -- scripts/common.sh@367 -- # return 0 00:13:48.259 14:08:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.259 14:08:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:48.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.259 --rc genhtml_branch_coverage=1 00:13:48.259 --rc genhtml_function_coverage=1 00:13:48.259 --rc genhtml_legend=1 00:13:48.259 --rc geninfo_all_blocks=1 00:13:48.259 --rc geninfo_unexecuted_blocks=1 00:13:48.259 00:13:48.259 ' 00:13:48.259 14:08:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:48.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.259 --rc genhtml_branch_coverage=1 00:13:48.259 --rc genhtml_function_coverage=1 00:13:48.259 --rc genhtml_legend=1 00:13:48.259 --rc geninfo_all_blocks=1 00:13:48.259 --rc geninfo_unexecuted_blocks=1 00:13:48.259 00:13:48.259 ' 00:13:48.259 14:08:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:48.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.259 --rc genhtml_branch_coverage=1 00:13:48.259 --rc genhtml_function_coverage=1 00:13:48.259 --rc genhtml_legend=1 00:13:48.259 --rc geninfo_all_blocks=1 00:13:48.259 --rc geninfo_unexecuted_blocks=1 00:13:48.259 00:13:48.259 ' 00:13:48.259 14:08:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:48.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.259 --rc genhtml_branch_coverage=1 00:13:48.259 --rc genhtml_function_coverage=1 00:13:48.259 --rc genhtml_legend=1 00:13:48.259 --rc geninfo_all_blocks=1 00:13:48.259 --rc geninfo_unexecuted_blocks=1 00:13:48.259 00:13:48.259 ' 00:13:48.259 14:08:51 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:48.259 14:08:51 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:48.259 14:08:51 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:48.259 14:08:51 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:48.259 14:08:51 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:48.259 14:08:51 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:48.259 14:08:51 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:48.259 14:08:51 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:48.259 14:08:51 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:48.259 14:08:51 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:48.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
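Before the test body runs, ublk_recovery.sh launches a dedicated target with ublk debug logging on a two-core mask and blocks until the RPC socket answers; that is the spdk_pid=69595 and waitforlisten pair recorded below. A rough sketch of that prologue, assuming only the helpers already visible in this log (waitforlisten polls /var/tmp/spdk.sock until the target with the given pid accepts RPCs):

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # cores 0-1, ublk debug tracing enabled
    spdk_pid=$!
    waitforlisten "$spdk_pid"                   # returns once spdk.sock is serving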
00:13:48.259 14:08:51 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69595 00:13:48.259 14:08:51 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.259 14:08:51 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69595 00:13:48.259 14:08:51 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:48.259 14:08:51 -- common/autotest_common.sh@829 -- # '[' -z 69595 ']' 00:13:48.259 14:08:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.259 14:08:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.259 14:08:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.259 14:08:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.259 14:08:51 -- common/autotest_common.sh@10 -- # set +x 00:13:48.259 [2024-12-08 14:08:51.083045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.259 [2024-12-08 14:08:51.083165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69595 ] 00:13:48.521 [2024-12-08 14:08:51.232434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.521 [2024-12-08 14:08:51.402104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.521 [2024-12-08 14:08:51.402665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.521 [2024-12-08 14:08:51.402718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.908 14:08:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.908 14:08:52 -- common/autotest_common.sh@862 -- # return 0 00:13:49.908 14:08:52 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:49.908 14:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.908 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:13:49.908 [2024-12-08 14:08:52.583660] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:49.908 14:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.908 14:08:52 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:49.908 14:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.908 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:13:49.908 malloc0 00:13:49.908 14:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.908 14:08:52 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:49.908 14:08:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.908 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:13:49.908 [2024-12-08 14:08:52.678116] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:49.908 [2024-12-08 14:08:52.678214] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:49.908 [2024-12-08 14:08:52.678221] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:49.908 [2024-12-08 14:08:52.678229] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.908 [2024-12-08 14:08:52.687103] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.908 [2024-12-08 
14:08:52.687127] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.908 [2024-12-08 14:08:52.694006] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.908 [2024-12-08 14:08:52.694131] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:49.908 [2024-12-08 14:08:52.711005] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.908 1 00:13:49.908 14:08:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.908 14:08:52 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:50.844 14:08:53 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69632 00:13:50.844 14:08:53 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:50.844 14:08:53 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:51.103 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.103 fio-3.35 00:13:51.103 Starting 1 process 00:13:56.394 14:08:58 -- ublk/ublk_recovery.sh@36 -- # kill -9 69595 00:13:56.394 14:08:58 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:01.687 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69595 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:01.687 14:09:03 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69748 00:14:01.687 14:09:03 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.687 14:09:03 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69748 00:14:01.687 14:09:03 -- common/autotest_common.sh@829 -- # '[' -z 69748 ']' 00:14:01.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.687 14:09:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.687 14:09:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.687 14:09:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.687 14:09:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.687 14:09:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.687 14:09:03 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:01.687 [2024-12-08 14:09:03.822554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
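This is the crux of the recovery test: fio is still mid-run against /dev/ublkb1 when pid 69595 is killed with SIGKILL, and the replacement target (pid 69748) must re-attach the live kernel device rather than re-create it. The recovery-side RPCs, sketched from the rpc_cmd calls recorded just below, succeed because ublk_recover_disk drives UBLK_CMD_GET_DEV_INFO, UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY instead of a fresh UBLK_CMD_ADD_DEV:

    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096   # same bdev name and geometry as before the kill
    rpc_cmd ublk_recover_disk malloc0 1             # re-binds malloc0 to the existing ublk id 1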
00:14:01.687 [2024-12-08 14:09:03.822704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:14:01.687 [2024-12-08 14:09:03.976102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.687 [2024-12-08 14:09:04.225889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:01.687 [2024-12-08 14:09:04.226271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.687 [2024-12-08 14:09:04.226288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.621 14:09:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.621 14:09:05 -- common/autotest_common.sh@862 -- # return 0 00:14:02.621 14:09:05 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:02.621 14:09:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.621 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:14:02.621 [2024-12-08 14:09:05.334036] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:02.621 14:09:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.621 14:09:05 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:02.621 14:09:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.621 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:14:02.621 malloc0 00:14:02.621 14:09:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.621 14:09:05 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:02.621 14:09:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.621 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:14:02.621 [2024-12-08 14:09:05.444146] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:02.621 [2024-12-08 14:09:05.444191] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:02.621 [2024-12-08 14:09:05.444200] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:02.621 [2024-12-08 14:09:05.451007] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:02.621 [2024-12-08 14:09:05.451030] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:02.621 [2024-12-08 14:09:05.451113] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:02.621 1 00:14:02.621 14:09:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.621 14:09:05 -- ublk/ublk_recovery.sh@52 -- # wait 69632 00:14:29.151 [2024-12-08 14:09:29.171010] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:29.151 [2024-12-08 14:09:29.177810] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:29.151 [2024-12-08 14:09:29.185233] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:29.151 [2024-12-08 14:09:29.185260] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:51.071 00:14:51.071 fio_test: (groupid=0, jobs=1): err= 0: pid=69640: Sun Dec 8 14:09:53 2024 00:14:51.071 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(3210MiB/60002msec) 00:14:51.071 slat (nsec): min=1363, max=322958, avg=5542.14, 
stdev=1515.08 00:14:51.071 clat (usec): min=789, max=30470k, avg=4734.48, stdev=275080.64 00:14:51.071 lat (usec): min=795, max=30470k, avg=4740.02, stdev=275080.63 00:14:51.071 clat percentiles (usec): 00:14:51.071 | 1.00th=[ 1876], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2073], 00:14:51.071 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2114], 60.00th=[ 2147], 00:14:51.071 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3195], 00:14:51.071 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7177], 99.95th=[12387], 00:14:51.071 | 99.99th=[13173] 00:14:51.071 bw ( KiB/s): min=28952, max=115192, per=100.00%, avg=109659.12, stdev=14836.85, samples=59 00:14:51.071 iops : min= 7238, max=28798, avg=27414.78, stdev=3709.21, samples=59 00:14:51.071 write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(3207MiB/60002msec); 0 zone resets 00:14:51.071 slat (nsec): min=1456, max=209807, avg=5772.74, stdev=1455.34 00:14:51.071 clat (usec): min=681, max=30471k, avg=4603.72, stdev=262626.63 00:14:51.071 lat (usec): min=686, max=30471k, avg=4609.49, stdev=262626.63 00:14:51.071 clat percentiles (usec): 00:14:51.071 | 1.00th=[ 1926], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2180], 00:14:51.071 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2212], 60.00th=[ 2245], 00:14:51.071 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3130], 00:14:51.071 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7242], 99.95th=[12256], 00:14:51.071 | 99.99th=[13304] 00:14:51.071 bw ( KiB/s): min=29544, max=115096, per=100.00%, avg=109509.02, stdev=14712.66, samples=59 00:14:51.071 iops : min= 7386, max=28774, avg=27377.25, stdev=3678.16, samples=59 00:14:51.071 lat (usec) : 750=0.01%, 1000=0.01% 00:14:51.071 lat (msec) : 2=2.67%, 4=94.39%, 10=2.88%, 20=0.04%, >=2000=0.01% 00:14:51.071 cpu : usr=3.05%, sys=15.91%, ctx=54078, majf=0, minf=14 00:14:51.071 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:51.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.071 issued rwts: total=821832,820896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.071 00:14:51.071 Run status group 0 (all jobs): 00:14:51.071 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=3210MiB (3366MB), run=60002-60002msec 00:14:51.071 WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=3207MiB (3362MB), run=60002-60002msec 00:14:51.071 00:14:51.071 Disk stats (read/write): 00:14:51.071 ublkb1: ios=818780/817791, merge=0/0, ticks=3837874/3654827, in_queue=7492702, util=99.89% 00:14:51.071 14:09:53 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:51.071 14:09:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.071 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:14:51.071 [2024-12-08 14:09:53.974802] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:51.329 [2024-12-08 14:09:54.015022] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:51.329 [2024-12-08 14:09:54.015186] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:51.329 [2024-12-08 14:09:54.017235] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:51.329 [2024-12-08 14:09:54.021083] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
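With the 60-second random read/write run complete (about 820k IOs in each direction at roughly 53.5 MiB/s, ublkb1 at 99.89% utilization), teardown mirrors bring-up in reverse. A sketch of the closing rpc_cmd calls, matching the STOP_DEV/DEL_DEV and shutdown entries around this point in the log:

    rpc_cmd ublk_stop_disk 1      # drives UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    rpc_cmd ublk_destroy_target   # _ublk_fini: 'ublk target has been destroyed'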
00:14:51.329 [2024-12-08 14:09:54.021097] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:51.329 14:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.329 14:09:54 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:51.329 14:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.329 14:09:54 -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 [2024-12-08 14:09:54.032073] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:51.329 [2024-12-08 14:09:54.040001] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:51.329 [2024-12-08 14:09:54.040032] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:51.329 14:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.329 14:09:54 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:51.329 14:09:54 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:51.329 14:09:54 -- ublk/ublk_recovery.sh@14 -- # killprocess 69748 00:14:51.329 14:09:54 -- common/autotest_common.sh@936 -- # '[' -z 69748 ']' 00:14:51.329 14:09:54 -- common/autotest_common.sh@940 -- # kill -0 69748 00:14:51.329 14:09:54 -- common/autotest_common.sh@941 -- # uname 00:14:51.329 14:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.329 14:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69748 00:14:51.330 14:09:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.330 killing process with pid 69748 00:14:51.330 14:09:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.330 14:09:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69748' 00:14:51.330 14:09:54 -- common/autotest_common.sh@955 -- # kill 69748 00:14:51.330 14:09:54 -- common/autotest_common.sh@960 -- # wait 69748 00:14:52.273 [2024-12-08 14:09:55.132803] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:52.273 [2024-12-08 14:09:55.132849] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:53.209 ************************************ 00:14:53.209 END TEST ublk_recovery 00:14:53.209 ************************************ 00:14:53.209 00:14:53.209 real 1m5.047s 00:14:53.209 user 1m48.354s 00:14:53.209 sys 0m22.395s 00:14:53.209 14:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.209 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:14:53.209 14:09:55 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:53.209 14:09:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.209 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:14:53.209 14:09:55 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:53.209 14:09:55 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:53.209 14:09:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.209 14:09:55 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:14:53.209 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:14:53.209 ************************************ 00:14:53.209 START TEST ftl 00:14:53.209 ************************************ 00:14:53.209 14:09:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:53.209 * Looking for test storage... 00:14:53.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.209 14:09:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.209 14:09:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.209 14:09:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.470 14:09:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.470 14:09:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.470 14:09:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.470 14:09:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.470 14:09:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.470 14:09:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.470 14:09:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.470 14:09:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.470 14:09:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.470 14:09:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.470 14:09:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.470 14:09:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.470 14:09:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.470 14:09:56 -- scripts/common.sh@344 -- # : 1 00:14:53.470 14:09:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.470 14:09:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.470 14:09:56 -- scripts/common.sh@364 -- # decimal 1 00:14:53.470 14:09:56 -- scripts/common.sh@352 -- # local d=1 00:14:53.470 14:09:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.470 14:09:56 -- scripts/common.sh@354 -- # echo 1 00:14:53.470 14:09:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.470 14:09:56 -- scripts/common.sh@365 -- # decimal 2 00:14:53.470 14:09:56 -- scripts/common.sh@352 -- # local d=2 00:14:53.470 14:09:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.470 14:09:56 -- scripts/common.sh@354 -- # echo 2 00:14:53.470 14:09:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.470 14:09:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.470 14:09:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.470 14:09:56 -- scripts/common.sh@367 -- # return 0 00:14:53.470 14:09:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.470 14:09:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.470 --rc genhtml_branch_coverage=1 00:14:53.470 --rc genhtml_function_coverage=1 00:14:53.470 --rc genhtml_legend=1 00:14:53.470 --rc geninfo_all_blocks=1 00:14:53.470 --rc geninfo_unexecuted_blocks=1 00:14:53.470 00:14:53.470 ' 00:14:53.470 14:09:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.470 --rc genhtml_branch_coverage=1 00:14:53.470 --rc genhtml_function_coverage=1 00:14:53.470 --rc genhtml_legend=1 00:14:53.470 --rc geninfo_all_blocks=1 00:14:53.470 --rc geninfo_unexecuted_blocks=1 00:14:53.470 00:14:53.470 ' 00:14:53.470 14:09:56 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.470 --rc genhtml_branch_coverage=1 00:14:53.470 --rc genhtml_function_coverage=1 00:14:53.470 --rc genhtml_legend=1 00:14:53.470 --rc geninfo_all_blocks=1 00:14:53.470 --rc geninfo_unexecuted_blocks=1 00:14:53.470 00:14:53.470 ' 00:14:53.470 14:09:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.470 --rc genhtml_branch_coverage=1 00:14:53.470 --rc genhtml_function_coverage=1 00:14:53.470 --rc genhtml_legend=1 00:14:53.470 --rc geninfo_all_blocks=1 00:14:53.470 --rc geninfo_unexecuted_blocks=1 00:14:53.470 00:14:53.470 ' 00:14:53.470 14:09:56 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:53.470 14:09:56 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:53.470 14:09:56 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.470 14:09:56 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:53.470 14:09:56 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:53.470 14:09:56 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:53.470 14:09:56 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.470 14:09:56 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:53.470 14:09:56 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:53.470 14:09:56 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.470 14:09:56 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.470 14:09:56 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:53.470 14:09:56 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:53.470 14:09:56 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:53.470 14:09:56 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:53.470 14:09:56 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:53.470 14:09:56 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:53.470 14:09:56 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.470 14:09:56 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.470 14:09:56 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:53.470 14:09:56 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:53.471 14:09:56 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:53.471 14:09:56 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:53.471 14:09:56 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:53.471 14:09:56 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:53.471 14:09:56 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:53.471 14:09:56 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:53.471 14:09:56 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:53.471 14:09:56 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:53.471 14:09:56 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.471 14:09:56 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:53.471 14:09:56 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:53.471 14:09:56 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:53.471 14:09:56 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:53.471 14:09:56 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:53.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:53.748 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.748 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.748 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:53.748 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:54.037 14:09:56 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70558 00:14:54.037 14:09:56 -- ftl/ftl.sh@38 -- # waitforlisten 70558 00:14:54.037 14:09:56 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:54.037 14:09:56 -- common/autotest_common.sh@829 -- # '[' -z 70558 ']' 00:14:54.037 14:09:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.037 14:09:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.037 14:09:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.037 14:09:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.037 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:14:54.037 [2024-12-08 14:09:56.747909] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:54.037 [2024-12-08 14:09:56.748164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70558 ] 00:14:54.037 [2024-12-08 14:09:56.896824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.304 [2024-12-08 14:09:57.110578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.304 [2024-12-08 14:09:57.111020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.875 14:09:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.875 14:09:57 -- common/autotest_common.sh@862 -- # return 0 00:14:54.875 14:09:57 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:54.875 14:09:57 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:55.818 14:09:58 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:55.818 14:09:58 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:56.390 14:09:59 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:56.390 14:09:59 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:56.390 14:09:59 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:56.390 14:09:59 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:56.390 14:09:59 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:56.390 14:09:59 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:56.390 14:09:59 -- ftl/ftl.sh@50 
-- # break 00:14:56.390 14:09:59 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:56.390 14:09:59 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:56.390 14:09:59 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:56.390 14:09:59 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:56.651 14:09:59 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:56.651 14:09:59 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:56.651 14:09:59 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:56.651 14:09:59 -- ftl/ftl.sh@63 -- # break 00:14:56.651 14:09:59 -- ftl/ftl.sh@66 -- # killprocess 70558 00:14:56.651 14:09:59 -- common/autotest_common.sh@936 -- # '[' -z 70558 ']' 00:14:56.651 14:09:59 -- common/autotest_common.sh@940 -- # kill -0 70558 00:14:56.651 14:09:59 -- common/autotest_common.sh@941 -- # uname 00:14:56.651 14:09:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.651 14:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70558 00:14:56.651 killing process with pid 70558 00:14:56.651 14:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:56.651 14:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.651 14:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70558' 00:14:56.651 14:09:59 -- common/autotest_common.sh@955 -- # kill 70558 00:14:56.651 14:09:59 -- common/autotest_common.sh@960 -- # wait 70558 00:14:58.037 14:10:00 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:58.037 14:10:00 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:58.037 14:10:00 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:58.037 14:10:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:58.037 14:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.037 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:14:58.037 ************************************ 00:14:58.037 START TEST ftl_fio_basic 00:14:58.037 ************************************ 00:14:58.037 14:10:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:58.037 * Looking for test storage... 
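For anyone following the trace: ftl.sh chooses its NV-cache and base devices by dumping every bdev over RPC and filtering with jq, exactly as echoed in the lines above. A minimal standalone sketch of that selection in bash (the jq filters and the 1310720-block threshold are copied verbatim from the trace; parameterizing the excluded address with --arg is an addition for clarity, the script itself relies on the shell variable):

    # Sketch: reproduce the cache/base disk selection traced above.
    # Assumes a running spdk_tgt and the in-repo rpc.py.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache disk: a non-zoned NVMe bdev with 64-byte metadata and >= 1310720 blocks.
    cache_disk=$("$rpc_py" bdev_get_bdevs | jq -r '.[]
      | select(.md_size == 64 and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address' | head -n1)

    # Base disk: any other large non-zoned NVMe bdev, excluding the cache device.
    base_disk=$("$rpc_py" bdev_get_bdevs | jq -r --arg nv "$cache_disk" '.[]
      | select(.driver_specific.nvme[0].pci_address != $nv and .zoned == false
               and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)

    echo "nv_cache=$cache_disk base=$base_disk"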
00:14:58.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:58.037 14:10:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:58.037 14:10:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:58.037 14:10:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:58.037 14:10:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:58.037 14:10:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:58.037 14:10:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:58.037 14:10:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:58.037 14:10:00 -- scripts/common.sh@335 -- # IFS=.-: 00:14:58.037 14:10:00 -- scripts/common.sh@335 -- # read -ra ver1 00:14:58.037 14:10:00 -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.037 14:10:00 -- scripts/common.sh@336 -- # read -ra ver2 00:14:58.037 14:10:00 -- scripts/common.sh@337 -- # local 'op=<' 00:14:58.037 14:10:00 -- scripts/common.sh@339 -- # ver1_l=2 00:14:58.037 14:10:00 -- scripts/common.sh@340 -- # ver2_l=1 00:14:58.037 14:10:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:58.037 14:10:00 -- scripts/common.sh@343 -- # case "$op" in 00:14:58.037 14:10:00 -- scripts/common.sh@344 -- # : 1 00:14:58.037 14:10:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:58.037 14:10:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:58.037 14:10:00 -- scripts/common.sh@364 -- # decimal 1 00:14:58.298 14:10:00 -- scripts/common.sh@352 -- # local d=1 00:14:58.299 14:10:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.299 14:10:00 -- scripts/common.sh@354 -- # echo 1 00:14:58.299 14:10:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:58.299 14:10:00 -- scripts/common.sh@365 -- # decimal 2 00:14:58.299 14:10:00 -- scripts/common.sh@352 -- # local d=2 00:14:58.299 14:10:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.299 14:10:00 -- scripts/common.sh@354 -- # echo 2 00:14:58.299 14:10:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:58.299 14:10:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:58.299 14:10:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:58.299 14:10:00 -- scripts/common.sh@367 -- # return 0 00:14:58.299 14:10:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.299 14:10:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:58.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.299 --rc genhtml_branch_coverage=1 00:14:58.299 --rc genhtml_function_coverage=1 00:14:58.299 --rc genhtml_legend=1 00:14:58.299 --rc geninfo_all_blocks=1 00:14:58.299 --rc geninfo_unexecuted_blocks=1 00:14:58.299 00:14:58.299 ' 00:14:58.299 14:10:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:58.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.299 --rc genhtml_branch_coverage=1 00:14:58.299 --rc genhtml_function_coverage=1 00:14:58.299 --rc genhtml_legend=1 00:14:58.299 --rc geninfo_all_blocks=1 00:14:58.299 --rc geninfo_unexecuted_blocks=1 00:14:58.299 00:14:58.299 ' 00:14:58.299 14:10:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:58.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.299 --rc genhtml_branch_coverage=1 00:14:58.299 --rc genhtml_function_coverage=1 00:14:58.299 --rc genhtml_legend=1 00:14:58.299 --rc geninfo_all_blocks=1 00:14:58.299 --rc geninfo_unexecuted_blocks=1 00:14:58.299 00:14:58.299 ' 00:14:58.299 14:10:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:58.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.299 --rc genhtml_branch_coverage=1 00:14:58.299 --rc genhtml_function_coverage=1 00:14:58.299 --rc genhtml_legend=1 00:14:58.299 --rc geninfo_all_blocks=1 00:14:58.299 --rc geninfo_unexecuted_blocks=1 00:14:58.299 00:14:58.299 ' 00:14:58.299 14:10:00 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:58.299 14:10:00 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:58.299 14:10:00 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:58.299 14:10:00 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:58.299 14:10:00 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:58.299 14:10:00 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:58.299 14:10:00 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.299 14:10:00 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:58.299 14:10:00 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:58.299 14:10:00 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.299 14:10:00 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.299 14:10:00 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:58.299 14:10:00 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:58.299 14:10:00 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:58.299 14:10:00 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:58.299 14:10:00 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:58.299 14:10:00 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:58.299 14:10:00 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.299 14:10:00 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.299 14:10:00 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:58.299 14:10:00 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:58.299 14:10:00 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:58.299 14:10:00 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:58.299 14:10:00 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:58.299 14:10:00 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:58.299 14:10:00 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:58.299 14:10:00 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:58.299 14:10:00 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:58.299 14:10:00 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:58.299 14:10:00 -- ftl/fio.sh@11 -- # declare -A suite 00:14:58.299 14:10:00 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:58.299 14:10:00 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:58.299 14:10:00 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:58.299 14:10:00 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.299 14:10:00 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:58.299 14:10:00 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:58.299 14:10:00 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:58.299 14:10:00 -- ftl/fio.sh@26 -- # uuid= 00:14:58.299 14:10:00 -- ftl/fio.sh@27 -- # timeout=240 00:14:58.299 14:10:00 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:58.299 14:10:00 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:58.299 14:10:00 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:58.299 14:10:00 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:58.299 14:10:00 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:58.299 14:10:00 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:58.299 14:10:00 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:58.299 14:10:00 -- ftl/fio.sh@45 -- # svcpid=70691 00:14:58.299 14:10:00 -- ftl/fio.sh@46 -- # waitforlisten 70691 00:14:58.299 14:10:00 -- common/autotest_common.sh@829 -- # '[' -z 70691 ']' 00:14:58.299 14:10:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.299 14:10:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.299 14:10:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.299 14:10:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.299 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:14:58.299 14:10:00 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:58.299 [2024-12-08 14:10:01.060764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
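The fio suite repeats the startup pattern from ftl.sh above: launch spdk_tgt (here with -m 7, i.e. reactors on cores 0-2, as the reactor messages below confirm) and block in waitforlisten until the UNIX-domain RPC socket answers. A minimal sketch of that wait loop, assuming the stock rpc.py and the default /var/tmp/spdk.sock; the real helper in common/autotest_common.sh is more elaborate (retry accounting, xtrace handling), so treat this as the shape, not the implementation:

    # Sketch of the waitforlisten pattern, not the exact autotest_common.sh code.
    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
        # rpc_get_methods succeeds once the app is listening on the socket
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          return 0
        fi
        sleep 0.5
      done
      return 1                                      # gave up waiting
    }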
00:14:58.299 [2024-12-08 14:10:01.061125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70691 ] 00:14:58.299 [2024-12-08 14:10:01.212024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.559 [2024-12-08 14:10:01.397762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.559 [2024-12-08 14:10:01.398312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.559 [2024-12-08 14:10:01.398447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.559 [2024-12-08 14:10:01.398466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.938 14:10:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.938 14:10:02 -- common/autotest_common.sh@862 -- # return 0 00:14:59.938 14:10:02 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:59.938 14:10:02 -- ftl/common.sh@54 -- # local name=nvme0 00:14:59.938 14:10:02 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:59.938 14:10:02 -- ftl/common.sh@56 -- # local size=103424 00:14:59.938 14:10:02 -- ftl/common.sh@59 -- # local base_bdev 00:14:59.938 14:10:02 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:59.938 14:10:02 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:59.938 14:10:02 -- ftl/common.sh@62 -- # local base_size 00:14:59.938 14:10:02 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:59.938 14:10:02 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:59.938 14:10:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:59.938 14:10:02 -- common/autotest_common.sh@1369 -- # local bs 00:14:59.938 14:10:02 -- common/autotest_common.sh@1370 -- # local nb 00:14:59.938 14:10:02 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:00.198 14:10:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:00.198 { 00:15:00.198 "name": "nvme0n1", 00:15:00.198 "aliases": [ 00:15:00.198 "d52cee98-89ee-474b-9ba0-9134b5134b78" 00:15:00.198 ], 00:15:00.198 "product_name": "NVMe disk", 00:15:00.198 "block_size": 4096, 00:15:00.198 "num_blocks": 1310720, 00:15:00.198 "uuid": "d52cee98-89ee-474b-9ba0-9134b5134b78", 00:15:00.198 "assigned_rate_limits": { 00:15:00.198 "rw_ios_per_sec": 0, 00:15:00.198 "rw_mbytes_per_sec": 0, 00:15:00.198 "r_mbytes_per_sec": 0, 00:15:00.198 "w_mbytes_per_sec": 0 00:15:00.198 }, 00:15:00.198 "claimed": false, 00:15:00.198 "zoned": false, 00:15:00.198 "supported_io_types": { 00:15:00.198 "read": true, 00:15:00.198 "write": true, 00:15:00.198 "unmap": true, 00:15:00.198 "write_zeroes": true, 00:15:00.198 "flush": true, 00:15:00.198 "reset": true, 00:15:00.198 "compare": true, 00:15:00.198 "compare_and_write": false, 00:15:00.198 "abort": true, 00:15:00.198 "nvme_admin": true, 00:15:00.198 "nvme_io": true 00:15:00.198 }, 00:15:00.198 "driver_specific": { 00:15:00.198 "nvme": [ 00:15:00.198 { 00:15:00.198 "pci_address": "0000:00:07.0", 00:15:00.198 "trid": { 00:15:00.198 "trtype": "PCIe", 00:15:00.198 "traddr": "0000:00:07.0" 00:15:00.198 }, 00:15:00.198 "ctrlr_data": { 00:15:00.198 "cntlid": 0, 00:15:00.198 "vendor_id": "0x1b36", 00:15:00.198 "model_number": "QEMU NVMe Ctrl", 00:15:00.198 "serial_number": 
"12341", 00:15:00.198 "firmware_revision": "8.0.0", 00:15:00.198 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:00.198 "oacs": { 00:15:00.198 "security": 0, 00:15:00.198 "format": 1, 00:15:00.198 "firmware": 0, 00:15:00.198 "ns_manage": 1 00:15:00.198 }, 00:15:00.198 "multi_ctrlr": false, 00:15:00.198 "ana_reporting": false 00:15:00.198 }, 00:15:00.198 "vs": { 00:15:00.198 "nvme_version": "1.4" 00:15:00.198 }, 00:15:00.198 "ns_data": { 00:15:00.198 "id": 1, 00:15:00.198 "can_share": false 00:15:00.198 } 00:15:00.198 } 00:15:00.198 ], 00:15:00.198 "mp_policy": "active_passive" 00:15:00.198 } 00:15:00.198 } 00:15:00.198 ]' 00:15:00.198 14:10:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:00.198 14:10:03 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:00.198 14:10:03 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:00.198 14:10:03 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:00.198 14:10:03 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:00.198 14:10:03 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:00.198 14:10:03 -- ftl/common.sh@63 -- # base_size=5120 00:15:00.198 14:10:03 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:00.198 14:10:03 -- ftl/common.sh@67 -- # clear_lvols 00:15:00.198 14:10:03 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:00.198 14:10:03 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:00.458 14:10:03 -- ftl/common.sh@28 -- # stores= 00:15:00.459 14:10:03 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:00.718 14:10:03 -- ftl/common.sh@68 -- # lvs=50b5d0bd-b645-4ade-ab97-4200d287fc08 00:15:00.718 14:10:03 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 50b5d0bd-b645-4ade-ab97-4200d287fc08 00:15:00.977 14:10:03 -- ftl/fio.sh@48 -- # split_bdev=c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:00.977 14:10:03 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:00.977 14:10:03 -- ftl/common.sh@35 -- # local name=nvc0 00:15:00.977 14:10:03 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:00.977 14:10:03 -- ftl/common.sh@37 -- # local base_bdev=c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:00.977 14:10:03 -- ftl/common.sh@38 -- # local cache_size= 00:15:00.977 14:10:03 -- ftl/common.sh@41 -- # get_bdev_size c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:00.977 14:10:03 -- common/autotest_common.sh@1367 -- # local bdev_name=c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:00.977 14:10:03 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:00.977 14:10:03 -- common/autotest_common.sh@1369 -- # local bs 00:15:00.977 14:10:03 -- common/autotest_common.sh@1370 -- # local nb 00:15:00.977 14:10:03 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:00.977 14:10:03 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:00.977 { 00:15:00.977 "name": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:00.977 "aliases": [ 00:15:00.978 "lvs/nvme0n1p0" 00:15:00.978 ], 00:15:00.978 "product_name": "Logical Volume", 00:15:00.978 "block_size": 4096, 00:15:00.978 "num_blocks": 26476544, 00:15:00.978 "uuid": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:00.978 "assigned_rate_limits": { 00:15:00.978 "rw_ios_per_sec": 0, 00:15:00.978 "rw_mbytes_per_sec": 0, 00:15:00.978 "r_mbytes_per_sec": 0, 00:15:00.978 
"w_mbytes_per_sec": 0 00:15:00.978 }, 00:15:00.978 "claimed": false, 00:15:00.978 "zoned": false, 00:15:00.978 "supported_io_types": { 00:15:00.978 "read": true, 00:15:00.978 "write": true, 00:15:00.978 "unmap": true, 00:15:00.978 "write_zeroes": true, 00:15:00.978 "flush": false, 00:15:00.978 "reset": true, 00:15:00.978 "compare": false, 00:15:00.978 "compare_and_write": false, 00:15:00.978 "abort": false, 00:15:00.978 "nvme_admin": false, 00:15:00.978 "nvme_io": false 00:15:00.978 }, 00:15:00.978 "driver_specific": { 00:15:00.978 "lvol": { 00:15:00.978 "lvol_store_uuid": "50b5d0bd-b645-4ade-ab97-4200d287fc08", 00:15:00.978 "base_bdev": "nvme0n1", 00:15:00.978 "thin_provision": true, 00:15:00.978 "snapshot": false, 00:15:00.978 "clone": false, 00:15:00.978 "esnap_clone": false 00:15:00.978 } 00:15:00.978 } 00:15:00.978 } 00:15:00.978 ]' 00:15:00.978 14:10:03 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:00.978 14:10:03 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:00.978 14:10:03 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:00.978 14:10:03 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:00.978 14:10:03 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:00.978 14:10:03 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:00.978 14:10:03 -- ftl/common.sh@41 -- # local base_size=5171 00:15:00.978 14:10:03 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:00.978 14:10:03 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:01.236 14:10:04 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:01.236 14:10:04 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:01.236 14:10:04 -- ftl/common.sh@48 -- # get_bdev_size c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:01.236 14:10:04 -- common/autotest_common.sh@1367 -- # local bdev_name=c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:01.236 14:10:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:01.236 14:10:04 -- common/autotest_common.sh@1369 -- # local bs 00:15:01.236 14:10:04 -- common/autotest_common.sh@1370 -- # local nb 00:15:01.236 14:10:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:01.492 14:10:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:01.492 { 00:15:01.492 "name": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:01.492 "aliases": [ 00:15:01.492 "lvs/nvme0n1p0" 00:15:01.492 ], 00:15:01.492 "product_name": "Logical Volume", 00:15:01.492 "block_size": 4096, 00:15:01.492 "num_blocks": 26476544, 00:15:01.492 "uuid": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:01.492 "assigned_rate_limits": { 00:15:01.492 "rw_ios_per_sec": 0, 00:15:01.492 "rw_mbytes_per_sec": 0, 00:15:01.492 "r_mbytes_per_sec": 0, 00:15:01.492 "w_mbytes_per_sec": 0 00:15:01.492 }, 00:15:01.492 "claimed": false, 00:15:01.492 "zoned": false, 00:15:01.492 "supported_io_types": { 00:15:01.492 "read": true, 00:15:01.492 "write": true, 00:15:01.492 "unmap": true, 00:15:01.492 "write_zeroes": true, 00:15:01.492 "flush": false, 00:15:01.492 "reset": true, 00:15:01.492 "compare": false, 00:15:01.492 "compare_and_write": false, 00:15:01.492 "abort": false, 00:15:01.492 "nvme_admin": false, 00:15:01.492 "nvme_io": false 00:15:01.492 }, 00:15:01.492 "driver_specific": { 00:15:01.492 "lvol": { 00:15:01.492 "lvol_store_uuid": "50b5d0bd-b645-4ade-ab97-4200d287fc08", 00:15:01.492 "base_bdev": "nvme0n1", 00:15:01.492 "thin_provision": true, 
00:15:01.492 "snapshot": false, 00:15:01.492 "clone": false, 00:15:01.492 "esnap_clone": false 00:15:01.492 } 00:15:01.492 } 00:15:01.492 } 00:15:01.492 ]' 00:15:01.492 14:10:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:01.492 14:10:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:01.492 14:10:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:01.492 14:10:04 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:01.492 14:10:04 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:01.492 14:10:04 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:01.492 14:10:04 -- ftl/common.sh@48 -- # cache_size=5171 00:15:01.492 14:10:04 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:01.748 14:10:04 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:01.748 14:10:04 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:01.748 14:10:04 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:01.748 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:01.748 14:10:04 -- ftl/fio.sh@56 -- # get_bdev_size c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:01.748 14:10:04 -- common/autotest_common.sh@1367 -- # local bdev_name=c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:01.748 14:10:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:01.748 14:10:04 -- common/autotest_common.sh@1369 -- # local bs 00:15:01.748 14:10:04 -- common/autotest_common.sh@1370 -- # local nb 00:15:01.748 14:10:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c8f845bc-cca4-452c-bb05-1e73c034115c 00:15:02.006 14:10:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:02.006 { 00:15:02.006 "name": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:02.006 "aliases": [ 00:15:02.006 "lvs/nvme0n1p0" 00:15:02.006 ], 00:15:02.006 "product_name": "Logical Volume", 00:15:02.006 "block_size": 4096, 00:15:02.006 "num_blocks": 26476544, 00:15:02.006 "uuid": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:02.006 "assigned_rate_limits": { 00:15:02.006 "rw_ios_per_sec": 0, 00:15:02.006 "rw_mbytes_per_sec": 0, 00:15:02.006 "r_mbytes_per_sec": 0, 00:15:02.006 "w_mbytes_per_sec": 0 00:15:02.006 }, 00:15:02.006 "claimed": false, 00:15:02.006 "zoned": false, 00:15:02.006 "supported_io_types": { 00:15:02.006 "read": true, 00:15:02.006 "write": true, 00:15:02.006 "unmap": true, 00:15:02.006 "write_zeroes": true, 00:15:02.006 "flush": false, 00:15:02.006 "reset": true, 00:15:02.006 "compare": false, 00:15:02.006 "compare_and_write": false, 00:15:02.006 "abort": false, 00:15:02.006 "nvme_admin": false, 00:15:02.006 "nvme_io": false 00:15:02.006 }, 00:15:02.006 "driver_specific": { 00:15:02.006 "lvol": { 00:15:02.006 "lvol_store_uuid": "50b5d0bd-b645-4ade-ab97-4200d287fc08", 00:15:02.006 "base_bdev": "nvme0n1", 00:15:02.006 "thin_provision": true, 00:15:02.006 "snapshot": false, 00:15:02.006 "clone": false, 00:15:02.006 "esnap_clone": false 00:15:02.006 } 00:15:02.006 } 00:15:02.006 } 00:15:02.006 ]' 00:15:02.006 14:10:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:02.006 14:10:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:02.006 14:10:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:02.006 14:10:04 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:02.006 14:10:04 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:02.006 14:10:04 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:02.006 
14:10:04 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:02.006 14:10:04 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:02.006 14:10:04 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c8f845bc-cca4-452c-bb05-1e73c034115c -c nvc0n1p0 --l2p_dram_limit 60 00:15:02.264 [2024-12-08 14:10:05.000871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.001002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:02.264 [2024-12-08 14:10:05.001023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:02.264 [2024-12-08 14:10:05.001031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.001090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.001101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:02.264 [2024-12-08 14:10:05.001110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:02.264 [2024-12-08 14:10:05.001117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.001137] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:02.264 [2024-12-08 14:10:05.001706] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:02.264 [2024-12-08 14:10:05.001728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.001736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:02.264 [2024-12-08 14:10:05.001744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:15:02.264 [2024-12-08 14:10:05.001752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.001804] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 99079fe7-7f50-480c-b958-e070120912f8 00:15:02.264 [2024-12-08 14:10:05.003127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.003156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:02.264 [2024-12-08 14:10:05.003166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:02.264 [2024-12-08 14:10:05.003175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.010029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.010057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:02.264 [2024-12-08 14:10:05.010066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.805 ms 00:15:02.264 [2024-12-08 14:10:05.010074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.010141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.010150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:02.264 [2024-12-08 14:10:05.010157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:02.264 [2024-12-08 14:10:05.010166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.010215] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.010225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:02.264 [2024-12-08 14:10:05.010232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:02.264 [2024-12-08 14:10:05.010241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.010261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:02.264 [2024-12-08 14:10:05.013625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.013648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:02.264 [2024-12-08 14:10:05.013658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:15:02.264 [2024-12-08 14:10:05.013664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.013697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.013704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:02.264 [2024-12-08 14:10:05.013712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:02.264 [2024-12-08 14:10:05.013718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.013735] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:02.264 [2024-12-08 14:10:05.013827] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:02.264 [2024-12-08 14:10:05.013844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:02.264 [2024-12-08 14:10:05.013853] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:02.264 [2024-12-08 14:10:05.013862] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:02.264 [2024-12-08 14:10:05.013870] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:02.264 [2024-12-08 14:10:05.013879] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:02.264 [2024-12-08 14:10:05.013885] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:02.264 [2024-12-08 14:10:05.013894] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:02.264 [2024-12-08 14:10:05.013900] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:02.264 [2024-12-08 14:10:05.013909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.013915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:02.264 [2024-12-08 14:10:05.013922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:15:02.264 [2024-12-08 14:10:05.013929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.013991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.264 [2024-12-08 14:10:05.013999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:02.264 [2024-12-08 14:10:05.014007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:15:02.264 [2024-12-08 14:10:05.014014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.264 [2024-12-08 14:10:05.014080] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:02.264 [2024-12-08 14:10:05.014089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:02.264 [2024-12-08 14:10:05.014097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:02.264 [2024-12-08 14:10:05.014103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:02.264 [2024-12-08 14:10:05.014111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:02.264 [2024-12-08 14:10:05.014116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:02.264 [2024-12-08 14:10:05.014123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:02.264 [2024-12-08 14:10:05.014129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:02.264 [2024-12-08 14:10:05.014136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:02.264 [2024-12-08 14:10:05.014141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:02.264 [2024-12-08 14:10:05.014149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:02.264 [2024-12-08 14:10:05.014155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:02.264 [2024-12-08 14:10:05.014161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:02.264 [2024-12-08 14:10:05.014166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:02.264 [2024-12-08 14:10:05.014173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:02.264 [2024-12-08 14:10:05.014179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:02.264 [2024-12-08 14:10:05.014187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:02.264 [2024-12-08 14:10:05.014192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:02.264 [2024-12-08 14:10:05.014199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:02.264 [2024-12-08 14:10:05.014204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:02.264 [2024-12-08 14:10:05.014210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:02.264 [2024-12-08 14:10:05.014216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:02.264 [2024-12-08 14:10:05.014222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:02.265 [2024-12-08 14:10:05.014228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:02.265 [2024-12-08 14:10:05.014235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:02.265 [2024-12-08 14:10:05.014240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:02.265 [2024-12-08 14:10:05.014247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:02.265 [2024-12-08 14:10:05.014253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:02.265 [2024-12-08 14:10:05.014260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:02.265 [2024-12-08 14:10:05.014265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:02.265 [2024-12-08 14:10:05.014272] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:02.265 [2024-12-08 14:10:05.014277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:02.265 [2024-12-08 14:10:05.014285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:02.265 [2024-12-08 14:10:05.014300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:02.265 [2024-12-08 14:10:05.014308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:02.265 [2024-12-08 14:10:05.014314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:02.265 [2024-12-08 14:10:05.014321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:02.265 [2024-12-08 14:10:05.014327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:02.265 [2024-12-08 14:10:05.014334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:02.265 [2024-12-08 14:10:05.014343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:02.265 [2024-12-08 14:10:05.014350] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:02.265 [2024-12-08 14:10:05.014356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:02.265 [2024-12-08 14:10:05.014364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:02.265 [2024-12-08 14:10:05.014370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:02.265 [2024-12-08 14:10:05.014378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:02.265 [2024-12-08 14:10:05.014384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:02.265 [2024-12-08 14:10:05.014390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:02.265 [2024-12-08 14:10:05.014396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:02.265 [2024-12-08 14:10:05.014404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:02.265 [2024-12-08 14:10:05.014410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:02.265 [2024-12-08 14:10:05.014417] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:02.265 [2024-12-08 14:10:05.014425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:02.265 [2024-12-08 14:10:05.014434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:02.265 [2024-12-08 14:10:05.014440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:02.265 [2024-12-08 14:10:05.014447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:02.265 [2024-12-08 14:10:05.014453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:02.265 [2024-12-08 14:10:05.014460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:02.265 [2024-12-08 14:10:05.014465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:02.265 
[2024-12-08 14:10:05.014475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:02.265 [2024-12-08 14:10:05.014481] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:02.265 [2024-12-08 14:10:05.014488] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:02.265 [2024-12-08 14:10:05.014494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:02.265 [2024-12-08 14:10:05.014502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:02.265 [2024-12-08 14:10:05.014508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:02.265 [2024-12-08 14:10:05.014517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:02.265 [2024-12-08 14:10:05.014522] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:02.265 [2024-12-08 14:10:05.014530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:02.265 [2024-12-08 14:10:05.014538] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:02.265 [2024-12-08 14:10:05.014545] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:02.265 [2024-12-08 14:10:05.014551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:02.265 [2024-12-08 14:10:05.014558] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:02.265 [2024-12-08 14:10:05.014566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.014573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:02.265 [2024-12-08 14:10:05.014580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:15:02.265 [2024-12-08 14:10:05.014587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.028529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.028653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:02.265 [2024-12-08 14:10:05.028667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.893 ms 00:15:02.265 [2024-12-08 14:10:05.028676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.028749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.028764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:02.265 [2024-12-08 14:10:05.028772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:02.265 [2024-12-08 14:10:05.028780] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.057053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.057149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:02.265 [2024-12-08 14:10:05.057196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.234 ms 00:15:02.265 [2024-12-08 14:10:05.057217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.057255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.057285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:02.265 [2024-12-08 14:10:05.057302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:02.265 [2024-12-08 14:10:05.057320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.057729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.057820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:02.265 [2024-12-08 14:10:05.057866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:15:02.265 [2024-12-08 14:10:05.057887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.058016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.058044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:02.265 [2024-12-08 14:10:05.058092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:15:02.265 [2024-12-08 14:10:05.058112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.086298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.086412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:02.265 [2024-12-08 14:10:05.086464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.155 ms 00:15:02.265 [2024-12-08 14:10:05.086486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.096367] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:02.265 [2024-12-08 14:10:05.111888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.111988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:02.265 [2024-12-08 14:10:05.112032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.301 ms 00:15:02.265 [2024-12-08 14:10:05.112056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.164263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:02.265 [2024-12-08 14:10:05.164365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:02.265 [2024-12-08 14:10:05.164415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.167 ms 00:15:02.265 [2024-12-08 14:10:05.164435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:02.265 [2024-12-08 14:10:05.164480] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
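The superblock layout dump above can be cross-checked by hand: blk_offs and blk_sz are counted in 4 KiB FTL blocks, so the l2p region (type 0x2, blk_offs:0x20 blk_sz:0x5000) works out to 0x5000 * 4096 B = 80 MiB starting at 128 KiB, matching the "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" lines in the NV cache layout earlier. A quick shell check with the hex values copied from the dump:

    # Region type:0x2 (l2p) from the SB metadata layout dump
    blk_offs=$(( 0x20 )); blk_sz=$(( 0x5000 ))
    echo "l2p: offset $(( blk_offs * 4 )) KiB, size $(( blk_sz * 4096 / 1048576 )) MiB"
    # -> l2p: offset 128 KiB, size 80 MiB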
00:15:02.265 [2024-12-08 14:10:05.164513] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:05.543 [2024-12-08 14:10:07.921152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:07.921340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:05.543 [2024-12-08 14:10:07.921460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2756.661 ms 00:15:05.543 [2024-12-08 14:10:07.921488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:07.921760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:07.921946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:05.543 [2024-12-08 14:10:07.921990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:15:05.543 [2024-12-08 14:10:07.922018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:07.945918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:07.946095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:05.543 [2024-12-08 14:10:07.946165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.831 ms 00:15:05.543 [2024-12-08 14:10:07.946196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:07.968775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:07.968886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:05.543 [2024-12-08 14:10:07.968966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.527 ms 00:15:05.543 [2024-12-08 14:10:07.968999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:07.969338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:07.969371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:05.543 [2024-12-08 14:10:07.969395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:15:05.543 [2024-12-08 14:10:07.969456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.030451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:08.030574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:05.543 [2024-12-08 14:10:08.030641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.936 ms 00:15:05.543 [2024-12-08 14:10:08.030670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.055418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:08.055523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:05.543 [2024-12-08 14:10:08.055586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.698 ms 00:15:05.543 [2024-12-08 14:10:08.055614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.059929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:08.060051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
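The scrub step is the dominant cost of this first startup: 4 GiB wiped in 2756.661 ms, roughly 1.45 GiB/s, which accounts for most of the ~3084 ms 'FTL startup' total reported a few lines below. Checking the arithmetic with the durations taken from the trace:

    awk 'BEGIN { printf "scrub rate: %.2f GiB/s\n", 4 / (2756.661 / 1000) }'
    # -> scrub rate: 1.45 GiB/s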
00:15:05.543 [2024-12-08 14:10:08.060163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:15:05.543 [2024-12-08 14:10:08.060192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.083514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:08.083621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:05.543 [2024-12-08 14:10:08.083680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.229 ms 00:15:05.543 [2024-12-08 14:10:08.083708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.083792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:08.083823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:05.543 [2024-12-08 14:10:08.083846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:05.543 [2024-12-08 14:10:08.083866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.084024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.543 [2024-12-08 14:10:08.084061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:05.543 [2024-12-08 14:10:08.084086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:05.543 [2024-12-08 14:10:08.084194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.543 [2024-12-08 14:10:08.085233] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3083.879 ms, result 0 00:15:05.543 { 00:15:05.543 "name": "ftl0", 00:15:05.543 "uuid": "99079fe7-7f50-480c-b958-e070120912f8" 00:15:05.543 } 00:15:05.543 14:10:08 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:05.543 14:10:08 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:05.543 14:10:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:05.543 14:10:08 -- common/autotest_common.sh@899 -- # local i 00:15:05.543 14:10:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:05.543 14:10:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:05.543 14:10:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:05.543 14:10:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:05.801 [ 00:15:05.801 { 00:15:05.801 "name": "ftl0", 00:15:05.801 "aliases": [ 00:15:05.801 "99079fe7-7f50-480c-b958-e070120912f8" 00:15:05.801 ], 00:15:05.801 "product_name": "FTL disk", 00:15:05.801 "block_size": 4096, 00:15:05.801 "num_blocks": 20971520, 00:15:05.801 "uuid": "99079fe7-7f50-480c-b958-e070120912f8", 00:15:05.801 "assigned_rate_limits": { 00:15:05.801 "rw_ios_per_sec": 0, 00:15:05.801 "rw_mbytes_per_sec": 0, 00:15:05.801 "r_mbytes_per_sec": 0, 00:15:05.801 "w_mbytes_per_sec": 0 00:15:05.801 }, 00:15:05.801 "claimed": false, 00:15:05.801 "zoned": false, 00:15:05.801 "supported_io_types": { 00:15:05.801 "read": true, 00:15:05.801 "write": true, 00:15:05.801 "unmap": true, 00:15:05.801 "write_zeroes": true, 00:15:05.801 "flush": true, 00:15:05.801 "reset": false, 00:15:05.801 "compare": false, 00:15:05.801 "compare_and_write": false, 00:15:05.801 "abort": false, 00:15:05.801 "nvme_admin": false, 00:15:05.801 "nvme_io": false 00:15:05.801 }, 
00:15:05.801 "driver_specific": { 00:15:05.801 "ftl": { 00:15:05.801 "base_bdev": "c8f845bc-cca4-452c-bb05-1e73c034115c", 00:15:05.801 "cache": "nvc0n1p0" 00:15:05.801 } 00:15:05.801 } 00:15:05.801 } 00:15:05.801 ] 00:15:05.801 14:10:08 -- common/autotest_common.sh@905 -- # return 0 00:15:05.801 14:10:08 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:05.801 14:10:08 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:05.801 14:10:08 -- ftl/fio.sh@70 -- # echo ']}' 00:15:05.801 14:10:08 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:06.060 [2024-12-08 14:10:08.849580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.060 [2024-12-08 14:10:08.849614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:06.060 [2024-12-08 14:10:08.849624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:06.060 [2024-12-08 14:10:08.849632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.060 [2024-12-08 14:10:08.849657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:06.060 [2024-12-08 14:10:08.851767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.060 [2024-12-08 14:10:08.851878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:06.060 [2024-12-08 14:10:08.851898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:15:06.060 [2024-12-08 14:10:08.851905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.060 [2024-12-08 14:10:08.852310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.060 [2024-12-08 14:10:08.852323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:06.060 [2024-12-08 14:10:08.852332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:15:06.060 [2024-12-08 14:10:08.852339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.060 [2024-12-08 14:10:08.854813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.060 [2024-12-08 14:10:08.854831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:06.060 [2024-12-08 14:10:08.854841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.457 ms 00:15:06.060 [2024-12-08 14:10:08.854850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.060 [2024-12-08 14:10:08.859657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.060 [2024-12-08 14:10:08.859702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:06.060 [2024-12-08 14:10:08.859726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:15:06.060 [2024-12-08 14:10:08.859744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.060 [2024-12-08 14:10:08.877578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.060 [2024-12-08 14:10:08.877671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:06.060 [2024-12-08 14:10:08.877687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.745 ms 00:15:06.060 [2024-12-08 14:10:08.877692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.060 [2024-12-08 14:10:08.890504] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.060 [2024-12-08 14:10:08.890597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:15:06.060 [2024-12-08 14:10:08.890657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.780 ms
00:15:06.060 [2024-12-08 14:10:08.890676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.060 [2024-12-08 14:10:08.890827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.060 [2024-12-08 14:10:08.890903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:15:06.060 [2024-12-08 14:10:08.890921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms
00:15:06.060 [2024-12-08 14:10:08.890971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.060 [2024-12-08 14:10:08.908915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.060 [2024-12-08 14:10:08.909016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:15:06.060 [2024-12-08 14:10:08.909066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.901 ms
00:15:06.060 [2024-12-08 14:10:08.909084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.060 [2024-12-08 14:10:08.926887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.060 [2024-12-08 14:10:08.926971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:15:06.061 [2024-12-08 14:10:08.927023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.739 ms
00:15:06.061 [2024-12-08 14:10:08.927042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.061 [2024-12-08 14:10:08.944478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.061 [2024-12-08 14:10:08.944563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:15:06.061 [2024-12-08 14:10:08.944610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.396 ms
00:15:06.061 [2024-12-08 14:10:08.944628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.061 [2024-12-08 14:10:08.961726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.061 [2024-12-08 14:10:08.961809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:15:06.061 [2024-12-08 14:10:08.961850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.018 ms
00:15:06.061 [2024-12-08 14:10:08.961867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.061 [2024-12-08 14:10:08.961904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:15:06.061 [2024-12-08 14:10:08.961932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100 (all identical): 0 / 261120 wr_cnt: 0 state: free
00:15:06.062 [2024-12-08 14:10:08.963368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:15:06.062 [2024-12-08 14:10:08.963375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 99079fe7-7f50-480c-b958-e070120912f8
00:15:06.062 [2024-12-08 14:10:08.963381] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:15:06.062 [2024-12-08 14:10:08.963388] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:15:06.062 [2024-12-08 14:10:08.963394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
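A note on the "WAF: inf" reported on the next line: that is the expected degenerate value at this point in the test, since the ftl0 bdev was created and is being unloaded again without any user I/O having reached it. Assuming the dump computes write amplification in the usual way, as media writes over host writes, the counters above give

    WAF = total writes / user writes = 960 / 0  ->  undefined, printed as inf

where the 960 total writes are the startup/shutdown metadata persists (superblock, band info, trim and valid-map metadata) traced earlier in the shutdown sequence.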
00:15:06.062 [2024-12-08 14:10:08.963401] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:15:06.062 [2024-12-08 14:10:08.963406] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:15:06.062 [2024-12-08 14:10:08.963413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:15:06.062 [2024-12-08 14:10:08.963420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:15:06.062 [2024-12-08 14:10:08.963427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:15:06.062 [2024-12-08 14:10:08.963431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:15:06.062 [2024-12-08 14:10:08.963440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.062 [2024-12-08 14:10:08.963446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:15:06.062 [2024-12-08 14:10:08.963456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms
00:15:06.062 [2024-12-08 14:10:08.963464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.062 [2024-12-08 14:10:08.973768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.062 [2024-12-08 14:10:08.973848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:15:06.062 [2024-12-08 14:10:08.973940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.263 ms
00:15:06.062 [2024-12-08 14:10:08.973959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.062 [2024-12-08 14:10:08.974142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:06.062 [2024-12-08 14:10:08.974167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:15:06.062 [2024-12-08 14:10:08.974216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms
00:15:06.062 [2024-12-08 14:10:08.974238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.319 [2024-12-08 14:10:09.010788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:06.319 [2024-12-08 14:10:09.010878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:06.320 [2024-12-08 14:10:09.010921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:06.320 [2024-12-08 14:10:09.010942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.320 [2024-12-08 14:10:09.011015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:06.320 [2024-12-08 14:10:09.011044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:06.320 [2024-12-08 14:10:09.011064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:06.320 [2024-12-08 14:10:09.011078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.320 [2024-12-08 14:10:09.011200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:06.320 [2024-12-08 14:10:09.011227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:06.320 [2024-12-08 14:10:09.011250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:06.320 [2024-12-08 14:10:09.011267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.320 [2024-12-08 14:10:09.011340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:06.320 [2024-12-08 14:10:09.011363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:06.320 [2024-12-08 14:10:09.011381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.011396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.081135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.081276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:06.320 [2024-12-08 14:10:09.081328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.081353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.105316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.105411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:06.320 [2024-12-08 14:10:09.105455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.105475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.105546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.105570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:06.320 [2024-12-08 14:10:09.105589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.105608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.105772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.105801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:06.320 [2024-12-08 14:10:09.105820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.105836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.105933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.105957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:06.320 [2024-12-08 14:10:09.106047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.106067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.106123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.106146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:06.320 [2024-12-08 14:10:09.106258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.106276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.106385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.106409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:06.320 [2024-12-08 14:10:09.106427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:06.320 [2024-12-08 14:10:09.106482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.320 [2024-12-08 14:10:09.106542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:06.320 [2024-12-08 14:10:09.106567] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:06.320 [2024-12-08 14:10:09.106586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:06.320 [2024-12-08 14:10:09.106635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:06.320 [2024-12-08 14:10:09.106803] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.180 ms, result 0
00:15:06.320 true
00:15:06.320 14:10:09 -- ftl/fio.sh@75 -- # killprocess 70691
00:15:06.320 14:10:09 -- common/autotest_common.sh@936 -- # '[' -z 70691 ']'
00:15:06.320 14:10:09 -- common/autotest_common.sh@940 -- # kill -0 70691
00:15:06.320 14:10:09 -- common/autotest_common.sh@941 -- # uname
00:15:06.320 14:10:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:06.320 14:10:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70691
00:15:06.320 14:10:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:06.320 14:10:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:06.320 14:10:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70691'
00:15:06.320 killing process with pid 70691
00:15:06.320 14:10:09 -- common/autotest_common.sh@955 -- # kill 70691
00:15:06.320 14:10:09 -- common/autotest_common.sh@960 -- # wait 70691
00:15:16.273 14:10:18 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:15:16.273 14:10:18 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:16.273 14:10:18 -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:15:16.273 14:10:18 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:16.273 14:10:18 -- common/autotest_common.sh@10 -- # set +x
00:15:16.273 14:10:18 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:16.273 14:10:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:16.273 14:10:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:15:16.273 14:10:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:16.273 14:10:18 -- common/autotest_common.sh@1328 -- # local sanitizers
00:15:16.273 14:10:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:16.273 14:10:18 -- common/autotest_common.sh@1330 -- # shift
00:15:16.273 14:10:18 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:15:16.273 14:10:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:15:16.273 14:10:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:16.273 14:10:18 -- common/autotest_common.sh@1334 -- # grep libasan
00:15:16.273 14:10:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:15:16.273 14:10:19 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:16.273 14:10:19 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:16.273 14:10:19 -- common/autotest_common.sh@1336 -- # break
00:15:16.273 14:10:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:16.273 14:10:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
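The fio_plugin helper traced above boils down to a short preload dance: find the sanitizer runtime the spdk_bdev fio plugin was linked against, put it first in LD_PRELOAD (ASAN must be loaded before any other DSO or it aborts at startup), add the plugin itself, and hand fio the job file. A minimal sketch of that flow, distilled from the xtrace lines in this run (paths are the ones printed here, variable names are illustrative; the real helper in autotest_common.sh also falls back to clang's libclang_rt.asan, per the sanitizers array above):

    # Sketch only: condensed from the fio_plugin xtrace above.
    fio_dir=/usr/src/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

    # Which ASAN runtime is the plugin linked against?
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload the sanitizer first, then the SPDK bdev ioengine, and run the job.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$job"

With the spdk_bdev ioengine preloaded this way, the job file can target SPDK bdevs such as ftl0 by name rather than kernel block devices, which is what the output below shows (ioengine=spdk_bdev).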
00:15:16.273 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:15:16.273 fio-3.35
00:15:16.273 Starting 1 thread
00:15:22.843
00:15:22.843 test: (groupid=0, jobs=1): err= 0: pid=70931: Sun Dec 8 14:10:24 2024
00:15:22.843 read: IOPS=914, BW=60.7MiB/s (63.7MB/s)(255MiB/4192msec)
00:15:22.843 slat (nsec): min=4132, max=30613, avg=6776.38, stdev=3267.34
00:15:22.843 clat (usec): min=251, max=1395, avg=494.33, stdev=245.74
00:15:22.843 lat (usec): min=256, max=1416, avg=501.11, stdev=248.15
00:15:22.843 clat percentiles (usec):
00:15:22.843 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 318],
00:15:22.843 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 400],
00:15:22.843 | 70.00th=[ 519], 80.00th=[ 857], 90.00th=[ 906], 95.00th=[ 930],
00:15:22.843 | 99.00th=[ 1037], 99.50th=[ 1090], 99.90th=[ 1237], 99.95th=[ 1369],
00:15:22.843 | 99.99th=[ 1401]
00:15:22.843 write: IOPS=920, BW=61.1MiB/s (64.1MB/s)(256MiB/4189msec); 0 zone resets
00:15:22.843 slat (nsec): min=14552, max=79505, avg=21223.68, stdev=6234.04
00:15:22.843 clat (usec): min=293, max=2554, avg=553.88, stdev=293.45
00:15:22.843 lat (usec): min=307, max=2581, avg=575.10, stdev=297.86
00:15:22.843 clat percentiles (usec):
00:15:22.843 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 343],
00:15:22.843 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 379], 60.00th=[ 474],
00:15:22.843 | 70.00th=[ 594], 80.00th=[ 947], 90.00th=[ 979], 95.00th=[ 1029],
00:15:22.843 | 99.00th=[ 1532], 99.50th=[ 1582], 99.90th=[ 2024], 99.95th=[ 2409],
00:15:22.843 | 99.99th=[ 2540]
00:15:22.843 bw ( KiB/s): min=32912, max=89624, per=97.91%, avg=61285.00, stdev=27298.61, samples=8
00:15:22.843 iops : min= 484, max= 1318, avg=901.25, stdev=401.45, samples=8
00:15:22.843 lat (usec) : 500=67.84%, 750=7.26%, 1000=20.59%
00:15:22.843 lat (msec) : 2=4.25%, 4=0.07%
00:15:22.843 cpu : usr=99.21%, sys=0.12%, ctx=7, majf=0, minf=1318
00:15:22.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:22.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:22.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:22.843 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:22.843 latency : target=0, window=0, percentile=100.00%, depth=1
00:15:22.843
00:15:22.843 Run status group 0 (all jobs):
00:15:22.843 READ: bw=60.7MiB/s (63.7MB/s), 60.7MiB/s-60.7MiB/s (63.7MB/s-63.7MB/s), io=255MiB (267MB), run=4192-4192msec
00:15:22.843 WRITE: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=256MiB (269MB), run=4189-4189msec
00:15:23.102 -----------------------------------------------------
00:15:23.102 Suppressions used:
00:15:23.102 count bytes template
00:15:23.102 1 5 /usr/src/fio/parse.c
00:15:23.102 1 8 libtcmalloc_minimal.so
00:15:23.102 1 904 libcrypto.so
00:15:23.102 -----------------------------------------------------
00:15:23.102
00:15:23.102 14:10:25 -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:15:23.102 14:10:25 -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:23.102 14:10:25 -- common/autotest_common.sh@10 -- # set +x
00:15:23.102 14:10:25 -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:23.102 14:10:25 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:15:23.102 14:10:25 -- common/autotest_common.sh@722 -- # xtrace_disable
00:15:23.102 14:10:25 -- common/autotest_common.sh@10 -- # set +x
00:15:23.102 14:10:25 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:23.102 14:10:25 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:23.102 14:10:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:23.102 14:10:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.102 14:10:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:23.102 14:10:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.102 14:10:25 -- common/autotest_common.sh@1330 -- # shift 00:15:23.102 14:10:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:23.102 14:10:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.102 14:10:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:23.102 14:10:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:23.102 14:10:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:23.102 14:10:25 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:23.102 14:10:25 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:23.102 14:10:25 -- common/autotest_common.sh@1336 -- # break 00:15:23.102 14:10:25 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:23.102 14:10:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:23.360 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:23.360 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:23.360 fio-3.35 00:15:23.360 Starting 2 threads 00:15:49.900 00:15:49.900 first_half: (groupid=0, jobs=1): err= 0: pid=71039: Sun Dec 8 14:10:48 2024 00:15:49.900 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(255MiB/21412msec) 00:15:49.900 slat (nsec): min=2935, max=18217, avg=4630.54, stdev=986.74 00:15:49.900 clat (usec): min=547, max=318301, avg=32692.60, stdev=18803.00 00:15:49.900 lat (usec): min=554, max=318307, avg=32697.23, stdev=18803.05 00:15:49.900 clat percentiles (msec): 00:15:49.900 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 29], 00:15:49.900 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:15:49.900 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 44], 00:15:49.900 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 205], 99.95th=[ 275], 00:15:49.900 | 99.99th=[ 309] 00:15:49.900 write: IOPS=3195, BW=12.5MiB/s (13.1MB/s)(256MiB/20509msec); 0 zone resets 00:15:49.900 slat (usec): min=3, max=637, avg= 5.84, stdev= 4.22 00:15:49.900 clat (usec): min=359, max=75530, avg=9168.22, stdev=14866.32 00:15:49.900 lat (usec): min=367, max=75536, avg=9174.06, stdev=14866.53 00:15:49.900 clat percentiles (usec): 00:15:49.900 | 1.00th=[ 644], 5.00th=[ 742], 10.00th=[ 848], 20.00th=[ 1483], 00:15:49.900 | 30.00th=[ 3195], 40.00th=[ 4359], 50.00th=[ 4883], 60.00th=[ 5276], 00:15:49.900 | 70.00th=[ 5997], 80.00th=[ 9372], 90.00th=[14353], 95.00th=[57934], 00:15:49.900 | 99.00th=[64226], 99.50th=[68682], 99.90th=[72877], 99.95th=[73925], 00:15:49.900 | 99.99th=[74974] 00:15:49.900 bw ( KiB/s): min= 792, max=41616, per=93.22%, 
avg=23831.27, stdev=14500.82, samples=22 00:15:49.900 iops : min= 198, max=10404, avg=5957.82, stdev=3625.21, samples=22 00:15:49.900 lat (usec) : 500=0.03%, 750=2.79%, 1000=4.32% 00:15:49.900 lat (msec) : 2=4.06%, 4=6.91%, 10=25.00%, 20=4.62%, 50=46.70% 00:15:49.900 lat (msec) : 100=4.58%, 250=0.95%, 500=0.04% 00:15:49.900 cpu : usr=99.50%, sys=0.11%, ctx=34, majf=0, minf=5594 00:15:49.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:49.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.900 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.900 issued rwts: total=65389,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.900 second_half: (groupid=0, jobs=1): err= 0: pid=71040: Sun Dec 8 14:10:48 2024 00:15:49.900 read: IOPS=3071, BW=12.0MiB/s (12.6MB/s)(255MiB/21241msec) 00:15:49.900 slat (nsec): min=2970, max=33621, avg=4897.40, stdev=1054.03 00:15:49.900 clat (usec): min=578, max=320715, avg=33198.66, stdev=16459.60 00:15:49.900 lat (usec): min=583, max=320720, avg=33203.56, stdev=16459.70 00:15:49.900 clat percentiles (msec): 00:15:49.900 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:15:49.900 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:15:49.900 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 48], 00:15:49.900 | 99.00th=[ 127], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 169], 00:15:49.900 | 99.99th=[ 271] 00:15:49.900 write: IOPS=3781, BW=14.8MiB/s (15.5MB/s)(256MiB/17332msec); 0 zone resets 00:15:49.900 slat (usec): min=3, max=795, avg= 6.12, stdev= 5.68 00:15:49.900 clat (usec): min=365, max=74959, avg=8414.14, stdev=14409.91 00:15:49.900 lat (usec): min=375, max=74964, avg=8420.26, stdev=14409.92 00:15:49.900 clat percentiles (usec): 00:15:49.900 | 1.00th=[ 660], 5.00th=[ 758], 10.00th=[ 873], 20.00th=[ 1139], 00:15:49.900 | 30.00th=[ 2343], 40.00th=[ 3261], 50.00th=[ 4359], 60.00th=[ 5211], 00:15:49.900 | 70.00th=[ 6194], 80.00th=[ 9503], 90.00th=[11731], 95.00th=[56886], 00:15:49.900 | 99.00th=[63177], 99.50th=[67634], 99.90th=[72877], 99.95th=[73925], 00:15:49.900 | 99.99th=[73925] 00:15:49.900 bw ( KiB/s): min= 976, max=42440, per=100.00%, avg=27594.11, stdev=14920.73, samples=19 00:15:49.900 iops : min= 244, max=10610, avg=6898.53, stdev=3730.18, samples=19 00:15:49.900 lat (usec) : 500=0.03%, 750=2.34%, 1000=5.22% 00:15:49.900 lat (msec) : 2=6.41%, 4=9.64%, 10=18.22%, 20=5.75%, 50=46.68% 00:15:49.900 lat (msec) : 100=4.81%, 250=0.90%, 500=0.01% 00:15:49.900 cpu : usr=99.40%, sys=0.18%, ctx=33, majf=0, minf=5535 00:15:49.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:49.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.900 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.900 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.900 00:15:49.900 Run status group 0 (all jobs): 00:15:49.900 READ: bw=23.8MiB/s (25.0MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=510MiB (535MB), run=21241-21412msec 00:15:49.900 WRITE: bw=25.0MiB/s (26.2MB/s), 12.5MiB/s-14.8MiB/s (13.1MB/s-15.5MB/s), io=512MiB (537MB), run=17332-20509msec 00:15:49.900 ----------------------------------------------------- 00:15:49.900 Suppressions used: 00:15:49.900 count bytes template 00:15:49.900 2 10 
/usr/src/fio/parse.c 00:15:49.900 4 384 /usr/src/fio/iolog.c 00:15:49.900 1 8 libtcmalloc_minimal.so 00:15:49.900 1 904 libcrypto.so 00:15:49.900 ----------------------------------------------------- 00:15:49.900 00:15:49.900 14:10:50 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:49.900 14:10:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.900 14:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.900 14:10:50 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:49.900 14:10:50 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:49.900 14:10:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.900 14:10:50 -- common/autotest_common.sh@10 -- # set +x 00:15:49.900 14:10:50 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:49.900 14:10:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:49.900 14:10:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:49.900 14:10:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.900 14:10:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:49.900 14:10:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.900 14:10:50 -- common/autotest_common.sh@1330 -- # shift 00:15:49.900 14:10:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:49.900 14:10:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.900 14:10:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.900 14:10:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:49.900 14:10:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:49.900 14:10:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.900 14:10:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.900 14:10:50 -- common/autotest_common.sh@1336 -- # break 00:15:49.900 14:10:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:49.900 14:10:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:49.900 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:49.900 fio-3.35 00:15:49.900 Starting 1 thread 00:16:02.168 00:16:02.168 test: (groupid=0, jobs=1): err= 0: pid=71325: Sun Dec 8 14:11:05 2024 00:16:02.168 read: IOPS=8463, BW=33.1MiB/s (34.7MB/s)(255MiB/7704msec) 00:16:02.168 slat (nsec): min=2984, max=19218, avg=4558.38, stdev=1027.88 00:16:02.168 clat (usec): min=519, max=27046, avg=15116.15, stdev=1561.71 00:16:02.168 lat (usec): min=523, max=27051, avg=15120.71, stdev=1561.73 00:16:02.168 clat percentiles (usec): 00:16:02.168 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13829], 20.00th=[14222], 00:16:02.168 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:16:02.168 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[17695], 00:16:02.168 | 99.00th=[22414], 99.50th=[23462], 99.90th=[26084], 99.95th=[26608], 00:16:02.168 | 99.99th=[26870] 00:16:02.168 write: IOPS=12.5k, BW=48.9MiB/s (51.3MB/s)(256MiB/5236msec); 0 zone resets 
00:16:02.168 slat (usec): min=4, max=219, avg= 6.51, stdev= 2.68 00:16:02.168 clat (usec): min=453, max=52148, avg=10185.56, stdev=11634.23 00:16:02.168 lat (usec): min=458, max=52154, avg=10192.07, stdev=11634.27 00:16:02.168 clat percentiles (usec): 00:16:02.168 | 1.00th=[ 652], 5.00th=[ 865], 10.00th=[ 1012], 20.00th=[ 1156], 00:16:02.168 | 30.00th=[ 1303], 40.00th=[ 1631], 50.00th=[ 5932], 60.00th=[ 7242], 00:16:02.168 | 70.00th=[11863], 80.00th=[17957], 90.00th=[33817], 95.00th=[35390], 00:16:02.168 | 99.00th=[38536], 99.50th=[40109], 99.90th=[46924], 99.95th=[47973], 00:16:02.168 | 99.99th=[50070] 00:16:02.168 bw ( KiB/s): min=22278, max=76208, per=95.18%, avg=47653.64, stdev=15502.74, samples=11 00:16:02.168 iops : min= 5569, max=19052, avg=11913.55, stdev=3875.79, samples=11 00:16:02.168 lat (usec) : 500=0.01%, 750=1.33%, 1000=3.45% 00:16:02.168 lat (msec) : 2=15.82%, 4=0.55%, 10=12.63%, 20=56.06%, 50=10.14% 00:16:02.168 lat (msec) : 100=0.01% 00:16:02.168 cpu : usr=99.30%, sys=0.21%, ctx=18, majf=0, minf=5567 00:16:02.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:02.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.168 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.168 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.168 00:16:02.168 Run status group 0 (all jobs): 00:16:02.168 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=255MiB (267MB), run=7704-7704msec 00:16:02.168 WRITE: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=256MiB (268MB), run=5236-5236msec 00:16:04.081 ----------------------------------------------------- 00:16:04.081 Suppressions used: 00:16:04.081 count bytes template 00:16:04.081 1 5 /usr/src/fio/parse.c 00:16:04.081 2 192 /usr/src/fio/iolog.c 00:16:04.081 1 8 libtcmalloc_minimal.so 00:16:04.081 1 904 libcrypto.so 00:16:04.081 ----------------------------------------------------- 00:16:04.081 00:16:04.081 14:11:06 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:04.081 14:11:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.082 14:11:06 -- common/autotest_common.sh@10 -- # set +x 00:16:04.082 14:11:06 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:04.082 Remove shared memory files 00:16:04.082 14:11:06 -- ftl/fio.sh@85 -- # remove_shm 00:16:04.082 14:11:06 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:04.082 14:11:06 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:04.082 14:11:06 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:04.082 14:11:06 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56140 /dev/shm/spdk_tgt_trace.pid69595 00:16:04.082 14:11:06 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:04.082 14:11:06 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:04.082 ************************************ 00:16:04.082 END TEST ftl_fio_basic 00:16:04.082 ************************************ 00:16:04.082 00:16:04.082 real 1m5.977s 00:16:04.082 user 2m6.018s 00:16:04.082 sys 0m23.059s 00:16:04.082 14:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:04.082 14:11:06 -- common/autotest_common.sh@10 -- # set +x 00:16:04.082 14:11:06 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:04.082 14:11:06 -- 
common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:04.082 14:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.082 14:11:06 -- common/autotest_common.sh@10 -- # set +x 00:16:04.082 ************************************ 00:16:04.082 START TEST ftl_bdevperf 00:16:04.082 ************************************ 00:16:04.082 14:11:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:04.082 * Looking for test storage... 00:16:04.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:04.082 14:11:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:04.082 14:11:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:04.082 14:11:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:04.082 14:11:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:04.082 14:11:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:04.082 14:11:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:04.082 14:11:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:04.082 14:11:06 -- scripts/common.sh@335 -- # IFS=.-: 00:16:04.082 14:11:06 -- scripts/common.sh@335 -- # read -ra ver1 00:16:04.082 14:11:06 -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.082 14:11:06 -- scripts/common.sh@336 -- # read -ra ver2 00:16:04.082 14:11:06 -- scripts/common.sh@337 -- # local 'op=<' 00:16:04.082 14:11:06 -- scripts/common.sh@339 -- # ver1_l=2 00:16:04.082 14:11:06 -- scripts/common.sh@340 -- # ver2_l=1 00:16:04.082 14:11:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:04.082 14:11:06 -- scripts/common.sh@343 -- # case "$op" in 00:16:04.082 14:11:06 -- scripts/common.sh@344 -- # : 1 00:16:04.082 14:11:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:04.082 14:11:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.082 14:11:06 -- scripts/common.sh@364 -- # decimal 1 00:16:04.082 14:11:06 -- scripts/common.sh@352 -- # local d=1 00:16:04.082 14:11:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.082 14:11:06 -- scripts/common.sh@354 -- # echo 1 00:16:04.343 14:11:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:04.343 14:11:07 -- scripts/common.sh@365 -- # decimal 2 00:16:04.343 14:11:07 -- scripts/common.sh@352 -- # local d=2 00:16:04.343 14:11:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.343 14:11:07 -- scripts/common.sh@354 -- # echo 2 00:16:04.343 14:11:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:04.343 14:11:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:04.343 14:11:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:04.343 14:11:07 -- scripts/common.sh@367 -- # return 0 00:16:04.343 14:11:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.343 14:11:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:04.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.343 --rc genhtml_branch_coverage=1 00:16:04.343 --rc genhtml_function_coverage=1 00:16:04.343 --rc genhtml_legend=1 00:16:04.343 --rc geninfo_all_blocks=1 00:16:04.343 --rc geninfo_unexecuted_blocks=1 00:16:04.343 00:16:04.343 ' 00:16:04.343 14:11:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:04.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.343 --rc genhtml_branch_coverage=1 00:16:04.343 --rc genhtml_function_coverage=1 00:16:04.343 --rc genhtml_legend=1 00:16:04.343 --rc geninfo_all_blocks=1 00:16:04.343 --rc geninfo_unexecuted_blocks=1 00:16:04.343 00:16:04.343 ' 00:16:04.343 14:11:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:04.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.343 --rc genhtml_branch_coverage=1 00:16:04.343 --rc genhtml_function_coverage=1 00:16:04.343 --rc genhtml_legend=1 00:16:04.343 --rc geninfo_all_blocks=1 00:16:04.343 --rc geninfo_unexecuted_blocks=1 00:16:04.343 00:16:04.343 ' 00:16:04.343 14:11:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:04.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.343 --rc genhtml_branch_coverage=1 00:16:04.343 --rc genhtml_function_coverage=1 00:16:04.343 --rc genhtml_legend=1 00:16:04.343 --rc geninfo_all_blocks=1 00:16:04.343 --rc geninfo_unexecuted_blocks=1 00:16:04.343 00:16:04.343 ' 00:16:04.343 14:11:07 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:04.343 14:11:07 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:04.343 14:11:07 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:04.343 14:11:07 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:04.343 14:11:07 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:04.343 14:11:07 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:04.343 14:11:07 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.343 14:11:07 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:04.343 14:11:07 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:04.343 14:11:07 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.343 14:11:07 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.343 14:11:07 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:04.343 14:11:07 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:04.343 14:11:07 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:04.343 14:11:07 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:04.343 14:11:07 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:04.343 14:11:07 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:04.343 14:11:07 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.344 14:11:07 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.344 14:11:07 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:04.344 14:11:07 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:04.344 14:11:07 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:04.344 14:11:07 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:04.344 14:11:07 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:04.344 14:11:07 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:04.344 14:11:07 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:04.344 14:11:07 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:04.344 14:11:07 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:04.344 14:11:07 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@13 -- # use_append= 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:04.344 14:11:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.344 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71564 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@22 -- # waitforlisten 71564 00:16:04.344 14:11:07 -- common/autotest_common.sh@829 -- # '[' -z 71564 ']' 00:16:04.344 14:11:07 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:04.344 14:11:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.344 14:11:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.344 14:11:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.344 14:11:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.344 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:16:04.344 [2024-12-08 14:11:07.103244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:04.344 [2024-12-08 14:11:07.103599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71564 ] 00:16:04.344 [2024-12-08 14:11:07.254291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.606 [2024-12-08 14:11:07.464779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.178 14:11:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.178 14:11:07 -- common/autotest_common.sh@862 -- # return 0 00:16:05.178 14:11:07 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:05.178 14:11:07 -- ftl/common.sh@54 -- # local name=nvme0 00:16:05.178 14:11:07 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:05.178 14:11:07 -- ftl/common.sh@56 -- # local size=103424 00:16:05.178 14:11:07 -- ftl/common.sh@59 -- # local base_bdev 00:16:05.178 14:11:07 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:05.439 14:11:08 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:05.439 14:11:08 -- ftl/common.sh@62 -- # local base_size 00:16:05.439 14:11:08 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:05.439 14:11:08 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:05.439 14:11:08 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:05.439 14:11:08 -- common/autotest_common.sh@1369 -- # local bs 00:16:05.439 14:11:08 -- common/autotest_common.sh@1370 -- # local nb 00:16:05.439 14:11:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:05.699 14:11:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:05.699 { 00:16:05.699 "name": "nvme0n1", 00:16:05.699 "aliases": [ 00:16:05.699 "9a7b1b03-0fb0-49ba-b93c-0668e8cc7de6" 00:16:05.699 ], 00:16:05.699 "product_name": "NVMe disk", 00:16:05.699 "block_size": 4096, 00:16:05.699 "num_blocks": 1310720, 00:16:05.699 "uuid": "9a7b1b03-0fb0-49ba-b93c-0668e8cc7de6", 00:16:05.699 "assigned_rate_limits": { 00:16:05.699 "rw_ios_per_sec": 0, 00:16:05.699 "rw_mbytes_per_sec": 0, 00:16:05.699 "r_mbytes_per_sec": 0, 00:16:05.699 "w_mbytes_per_sec": 0 00:16:05.699 }, 00:16:05.699 "claimed": true, 00:16:05.699 "claim_type": "read_many_write_one", 00:16:05.699 "zoned": false, 00:16:05.699 "supported_io_types": { 00:16:05.699 "read": true, 00:16:05.699 "write": true, 00:16:05.699 "unmap": true, 00:16:05.699 "write_zeroes": true, 00:16:05.699 "flush": true, 00:16:05.699 "reset": true, 00:16:05.699 "compare": true, 00:16:05.699 "compare_and_write": false, 00:16:05.699 "abort": true, 00:16:05.699 "nvme_admin": true, 00:16:05.699 "nvme_io": true 00:16:05.699 }, 00:16:05.699 "driver_specific": { 00:16:05.699 "nvme": [ 00:16:05.699 { 00:16:05.699 "pci_address": "0000:00:07.0", 00:16:05.699 "trid": { 00:16:05.699 "trtype": "PCIe", 00:16:05.699 "traddr": "0000:00:07.0" 00:16:05.699 }, 00:16:05.699 "ctrlr_data": { 00:16:05.699 "cntlid": 0, 
00:16:05.699 "vendor_id": "0x1b36", 00:16:05.699 "model_number": "QEMU NVMe Ctrl", 00:16:05.699 "serial_number": "12341", 00:16:05.699 "firmware_revision": "8.0.0", 00:16:05.700 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:05.700 "oacs": { 00:16:05.700 "security": 0, 00:16:05.700 "format": 1, 00:16:05.700 "firmware": 0, 00:16:05.700 "ns_manage": 1 00:16:05.700 }, 00:16:05.700 "multi_ctrlr": false, 00:16:05.700 "ana_reporting": false 00:16:05.700 }, 00:16:05.700 "vs": { 00:16:05.700 "nvme_version": "1.4" 00:16:05.700 }, 00:16:05.700 "ns_data": { 00:16:05.700 "id": 1, 00:16:05.700 "can_share": false 00:16:05.700 } 00:16:05.700 } 00:16:05.700 ], 00:16:05.700 "mp_policy": "active_passive" 00:16:05.700 } 00:16:05.700 } 00:16:05.700 ]' 00:16:05.700 14:11:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:05.700 14:11:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:05.700 14:11:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:05.700 14:11:08 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:05.700 14:11:08 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:05.700 14:11:08 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:05.700 14:11:08 -- ftl/common.sh@63 -- # base_size=5120 00:16:05.700 14:11:08 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:05.700 14:11:08 -- ftl/common.sh@67 -- # clear_lvols 00:16:05.700 14:11:08 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:05.700 14:11:08 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:05.960 14:11:08 -- ftl/common.sh@28 -- # stores=50b5d0bd-b645-4ade-ab97-4200d287fc08 00:16:05.960 14:11:08 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:05.960 14:11:08 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50b5d0bd-b645-4ade-ab97-4200d287fc08 00:16:06.221 14:11:08 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:06.480 14:11:09 -- ftl/common.sh@68 -- # lvs=f1169c89-b298-4c12-aea3-423fd16050f1 00:16:06.480 14:11:09 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1169c89-b298-4c12-aea3-423fd16050f1 00:16:06.480 14:11:09 -- ftl/bdevperf.sh@23 -- # split_bdev=d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.480 14:11:09 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.480 14:11:09 -- ftl/common.sh@35 -- # local name=nvc0 00:16:06.480 14:11:09 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:06.480 14:11:09 -- ftl/common.sh@37 -- # local base_bdev=d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.480 14:11:09 -- ftl/common.sh@38 -- # local cache_size= 00:16:06.480 14:11:09 -- ftl/common.sh@41 -- # get_bdev_size d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.480 14:11:09 -- common/autotest_common.sh@1367 -- # local bdev_name=d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.480 14:11:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:06.480 14:11:09 -- common/autotest_common.sh@1369 -- # local bs 00:16:06.480 14:11:09 -- common/autotest_common.sh@1370 -- # local nb 00:16:06.480 14:11:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.739 14:11:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:06.739 { 00:16:06.739 "name": "d19fb851-020b-4222-99f8-767f84df43bf", 00:16:06.739 "aliases": [ 
00:16:06.739 "lvs/nvme0n1p0" 00:16:06.739 ], 00:16:06.739 "product_name": "Logical Volume", 00:16:06.739 "block_size": 4096, 00:16:06.739 "num_blocks": 26476544, 00:16:06.739 "uuid": "d19fb851-020b-4222-99f8-767f84df43bf", 00:16:06.739 "assigned_rate_limits": { 00:16:06.739 "rw_ios_per_sec": 0, 00:16:06.739 "rw_mbytes_per_sec": 0, 00:16:06.739 "r_mbytes_per_sec": 0, 00:16:06.739 "w_mbytes_per_sec": 0 00:16:06.739 }, 00:16:06.739 "claimed": false, 00:16:06.739 "zoned": false, 00:16:06.739 "supported_io_types": { 00:16:06.739 "read": true, 00:16:06.739 "write": true, 00:16:06.739 "unmap": true, 00:16:06.739 "write_zeroes": true, 00:16:06.739 "flush": false, 00:16:06.739 "reset": true, 00:16:06.739 "compare": false, 00:16:06.739 "compare_and_write": false, 00:16:06.739 "abort": false, 00:16:06.739 "nvme_admin": false, 00:16:06.739 "nvme_io": false 00:16:06.739 }, 00:16:06.739 "driver_specific": { 00:16:06.739 "lvol": { 00:16:06.739 "lvol_store_uuid": "f1169c89-b298-4c12-aea3-423fd16050f1", 00:16:06.739 "base_bdev": "nvme0n1", 00:16:06.739 "thin_provision": true, 00:16:06.739 "snapshot": false, 00:16:06.739 "clone": false, 00:16:06.739 "esnap_clone": false 00:16:06.739 } 00:16:06.739 } 00:16:06.739 } 00:16:06.739 ]' 00:16:06.739 14:11:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:06.739 14:11:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:06.739 14:11:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:06.739 14:11:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:06.739 14:11:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:06.739 14:11:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:06.739 14:11:09 -- ftl/common.sh@41 -- # local base_size=5171 00:16:06.739 14:11:09 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:06.739 14:11:09 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:06.999 14:11:09 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:06.999 14:11:09 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:06.999 14:11:09 -- ftl/common.sh@48 -- # get_bdev_size d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.999 14:11:09 -- common/autotest_common.sh@1367 -- # local bdev_name=d19fb851-020b-4222-99f8-767f84df43bf 00:16:06.999 14:11:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:06.999 14:11:09 -- common/autotest_common.sh@1369 -- # local bs 00:16:06.999 14:11:09 -- common/autotest_common.sh@1370 -- # local nb 00:16:06.999 14:11:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d19fb851-020b-4222-99f8-767f84df43bf 00:16:07.258 14:11:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:07.258 { 00:16:07.258 "name": "d19fb851-020b-4222-99f8-767f84df43bf", 00:16:07.258 "aliases": [ 00:16:07.258 "lvs/nvme0n1p0" 00:16:07.258 ], 00:16:07.258 "product_name": "Logical Volume", 00:16:07.258 "block_size": 4096, 00:16:07.258 "num_blocks": 26476544, 00:16:07.258 "uuid": "d19fb851-020b-4222-99f8-767f84df43bf", 00:16:07.258 "assigned_rate_limits": { 00:16:07.258 "rw_ios_per_sec": 0, 00:16:07.258 "rw_mbytes_per_sec": 0, 00:16:07.258 "r_mbytes_per_sec": 0, 00:16:07.258 "w_mbytes_per_sec": 0 00:16:07.258 }, 00:16:07.258 "claimed": false, 00:16:07.258 "zoned": false, 00:16:07.258 "supported_io_types": { 00:16:07.258 "read": true, 00:16:07.258 "write": true, 00:16:07.258 "unmap": true, 00:16:07.258 "write_zeroes": true, 00:16:07.258 "flush": false, 00:16:07.258 "reset": true, 
00:16:07.258 "compare": false, 00:16:07.258 "compare_and_write": false, 00:16:07.258 "abort": false, 00:16:07.258 "nvme_admin": false, 00:16:07.258 "nvme_io": false 00:16:07.258 }, 00:16:07.258 "driver_specific": { 00:16:07.258 "lvol": { 00:16:07.258 "lvol_store_uuid": "f1169c89-b298-4c12-aea3-423fd16050f1", 00:16:07.258 "base_bdev": "nvme0n1", 00:16:07.258 "thin_provision": true, 00:16:07.258 "snapshot": false, 00:16:07.258 "clone": false, 00:16:07.258 "esnap_clone": false 00:16:07.258 } 00:16:07.258 } 00:16:07.258 } 00:16:07.258 ]' 00:16:07.258 14:11:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:07.258 14:11:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:07.258 14:11:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:07.259 14:11:10 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:07.259 14:11:10 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:07.259 14:11:10 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:07.259 14:11:10 -- ftl/common.sh@48 -- # cache_size=5171 00:16:07.259 14:11:10 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:07.517 14:11:10 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:07.517 14:11:10 -- ftl/bdevperf.sh@26 -- # get_bdev_size d19fb851-020b-4222-99f8-767f84df43bf 00:16:07.517 14:11:10 -- common/autotest_common.sh@1367 -- # local bdev_name=d19fb851-020b-4222-99f8-767f84df43bf 00:16:07.517 14:11:10 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:07.517 14:11:10 -- common/autotest_common.sh@1369 -- # local bs 00:16:07.517 14:11:10 -- common/autotest_common.sh@1370 -- # local nb 00:16:07.517 14:11:10 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d19fb851-020b-4222-99f8-767f84df43bf 00:16:07.517 14:11:10 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:07.517 { 00:16:07.517 "name": "d19fb851-020b-4222-99f8-767f84df43bf", 00:16:07.517 "aliases": [ 00:16:07.517 "lvs/nvme0n1p0" 00:16:07.517 ], 00:16:07.517 "product_name": "Logical Volume", 00:16:07.517 "block_size": 4096, 00:16:07.517 "num_blocks": 26476544, 00:16:07.517 "uuid": "d19fb851-020b-4222-99f8-767f84df43bf", 00:16:07.517 "assigned_rate_limits": { 00:16:07.517 "rw_ios_per_sec": 0, 00:16:07.517 "rw_mbytes_per_sec": 0, 00:16:07.517 "r_mbytes_per_sec": 0, 00:16:07.517 "w_mbytes_per_sec": 0 00:16:07.517 }, 00:16:07.517 "claimed": false, 00:16:07.517 "zoned": false, 00:16:07.517 "supported_io_types": { 00:16:07.517 "read": true, 00:16:07.517 "write": true, 00:16:07.517 "unmap": true, 00:16:07.517 "write_zeroes": true, 00:16:07.517 "flush": false, 00:16:07.517 "reset": true, 00:16:07.517 "compare": false, 00:16:07.517 "compare_and_write": false, 00:16:07.517 "abort": false, 00:16:07.517 "nvme_admin": false, 00:16:07.517 "nvme_io": false 00:16:07.517 }, 00:16:07.517 "driver_specific": { 00:16:07.517 "lvol": { 00:16:07.517 "lvol_store_uuid": "f1169c89-b298-4c12-aea3-423fd16050f1", 00:16:07.517 "base_bdev": "nvme0n1", 00:16:07.517 "thin_provision": true, 00:16:07.517 "snapshot": false, 00:16:07.517 "clone": false, 00:16:07.517 "esnap_clone": false 00:16:07.517 } 00:16:07.517 } 00:16:07.517 } 00:16:07.517 ]' 00:16:07.517 14:11:10 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:07.777 14:11:10 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:07.777 14:11:10 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:07.777 14:11:10 -- common/autotest_common.sh@1373 -- # 
nb=26476544 00:16:07.777 14:11:10 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:07.777 14:11:10 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:07.777 14:11:10 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:07.777 14:11:10 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d19fb851-020b-4222-99f8-767f84df43bf -c nvc0n1p0 --l2p_dram_limit 20 00:16:07.777 [2024-12-08 14:11:10.626222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.777 [2024-12-08 14:11:10.626261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:07.777 [2024-12-08 14:11:10.626274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:07.777 [2024-12-08 14:11:10.626280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.777 [2024-12-08 14:11:10.626322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.777 [2024-12-08 14:11:10.626329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:07.777 [2024-12-08 14:11:10.626337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:07.777 [2024-12-08 14:11:10.626343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.777 [2024-12-08 14:11:10.626357] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:07.777 [2024-12-08 14:11:10.626947] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:07.777 [2024-12-08 14:11:10.626963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.777 [2024-12-08 14:11:10.626969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:07.777 [2024-12-08 14:11:10.626977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:16:07.777 [2024-12-08 14:11:10.626999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.777 [2024-12-08 14:11:10.627022] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 36e08226-0cbe-4915-b915-0f32053f1f2c 00:16:07.777 [2024-12-08 14:11:10.628002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.628031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:07.778 [2024-12-08 14:11:10.628040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:07.778 [2024-12-08 14:11:10.628047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.632963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.633007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:07.778 [2024-12-08 14:11:10.633015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:16:07.778 [2024-12-08 14:11:10.633022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.633086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.633095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:07.778 [2024-12-08 14:11:10.633102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:07.778 [2024-12-08 14:11:10.633111] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.633149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.633158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:07.778 [2024-12-08 14:11:10.633166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:07.778 [2024-12-08 14:11:10.633173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.633189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:07.778 [2024-12-08 14:11:10.636271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.636364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:07.778 [2024-12-08 14:11:10.636414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:16:07.778 [2024-12-08 14:11:10.636433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.636472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.636516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:07.778 [2024-12-08 14:11:10.636536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:07.778 [2024-12-08 14:11:10.636552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.636602] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:07.778 [2024-12-08 14:11:10.636707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:07.778 [2024-12-08 14:11:10.636866] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:07.778 [2024-12-08 14:11:10.636893] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:07.778 [2024-12-08 14:11:10.636920] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:07.778 [2024-12-08 14:11:10.636943] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:07.778 [2024-12-08 14:11:10.636967] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:07.778 [2024-12-08 14:11:10.637048] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:07.778 [2024-12-08 14:11:10.637074] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:07.778 [2024-12-08 14:11:10.637089] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:07.778 [2024-12-08 14:11:10.637105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.637121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:07.778 [2024-12-08 14:11:10.637137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:16:07.778 [2024-12-08 14:11:10.637152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.637213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.778 [2024-12-08 14:11:10.637293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:16:07.778 [2024-12-08 14:11:10.637310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:07.778 [2024-12-08 14:11:10.637325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.778 [2024-12-08 14:11:10.637402] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:07.778 [2024-12-08 14:11:10.637421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:07.778 [2024-12-08 14:11:10.637438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:07.778 [2024-12-08 14:11:10.637518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:07.778 [2024-12-08 14:11:10.637548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:07.778 [2024-12-08 14:11:10.637577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:07.778 [2024-12-08 14:11:10.637631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:07.778 [2024-12-08 14:11:10.637667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:07.778 [2024-12-08 14:11:10.637681] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:07.778 [2024-12-08 14:11:10.637697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:07.778 [2024-12-08 14:11:10.637743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:07.778 [2024-12-08 14:11:10.637762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:07.778 [2024-12-08 14:11:10.637776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:07.778 [2024-12-08 14:11:10.637808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:07.778 [2024-12-08 14:11:10.637823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:07.778 [2024-12-08 14:11:10.637898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:07.778 [2024-12-08 14:11:10.637906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:07.778 [2024-12-08 14:11:10.637913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:07.778 [2024-12-08 14:11:10.637919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.778 [2024-12-08 14:11:10.637931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:07.778 [2024-12-08 14:11:10.637937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.778 [2024-12-08 14:11:10.637949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:07.778 [2024-12-08 14:11:10.637954] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.778 [2024-12-08 14:11:10.637967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:07.778 [2024-12-08 14:11:10.637975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:07.778 [2024-12-08 14:11:10.637996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:07.778 [2024-12-08 14:11:10.638004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:07.778 [2024-12-08 14:11:10.638009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:07.778 [2024-12-08 14:11:10.638018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:07.778 [2024-12-08 14:11:10.638023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:07.778 [2024-12-08 14:11:10.638029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:07.778 [2024-12-08 14:11:10.638035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:07.778 [2024-12-08 14:11:10.638041] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:07.778 [2024-12-08 14:11:10.638047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:07.778 [2024-12-08 14:11:10.638054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:07.778 [2024-12-08 14:11:10.638060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:07.778 [2024-12-08 14:11:10.638067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:07.778 [2024-12-08 14:11:10.638073] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:07.778 [2024-12-08 14:11:10.638079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:07.778 [2024-12-08 14:11:10.638085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:07.778 [2024-12-08 14:11:10.638092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:07.778 [2024-12-08 14:11:10.638098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:07.778 [2024-12-08 14:11:10.638105] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:07.778 [2024-12-08 14:11:10.638113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:07.778 [2024-12-08 14:11:10.638123] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:07.778 [2024-12-08 14:11:10.638128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:07.778 [2024-12-08 14:11:10.638135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:07.778 [2024-12-08 14:11:10.638141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:07.778 [2024-12-08 14:11:10.638148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:07.778 [2024-12-08 14:11:10.638153] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:07.778 [2024-12-08 14:11:10.638160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:07.778 [2024-12-08 14:11:10.638165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:07.779 [2024-12-08 14:11:10.638171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:07.779 [2024-12-08 14:11:10.638177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:07.779 [2024-12-08 14:11:10.638184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:07.779 [2024-12-08 14:11:10.638190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:07.779 [2024-12-08 14:11:10.638199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:07.779 [2024-12-08 14:11:10.638204] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:07.779 [2024-12-08 14:11:10.638212] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:07.779 [2024-12-08 14:11:10.638218] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:07.779 [2024-12-08 14:11:10.638224] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:07.779 [2024-12-08 14:11:10.638230] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:07.779 [2024-12-08 14:11:10.638237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:07.779 [2024-12-08 14:11:10.638243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.638251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:07.779 [2024-12-08 14:11:10.638256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:16:07.779 [2024-12-08 14:11:10.638263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.779 [2024-12-08 14:11:10.650346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.650446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:07.779 [2024-12-08 14:11:10.650490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.053 ms 00:16:07.779 [2024-12-08 14:11:10.650509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.779 [2024-12-08 14:11:10.650584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.650604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:07.779 [2024-12-08 14:11:10.650640] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:07.779 [2024-12-08 14:11:10.650659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.779 [2024-12-08 14:11:10.688882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.689008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:07.779 [2024-12-08 14:11:10.689058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.179 ms 00:16:07.779 [2024-12-08 14:11:10.689080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.779 [2024-12-08 14:11:10.689117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.689139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:07.779 [2024-12-08 14:11:10.689154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:07.779 [2024-12-08 14:11:10.689170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.779 [2024-12-08 14:11:10.689528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.689572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:07.779 [2024-12-08 14:11:10.689590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:07.779 [2024-12-08 14:11:10.689606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.779 [2024-12-08 14:11:10.689745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.779 [2024-12-08 14:11:10.689773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:07.779 [2024-12-08 14:11:10.689792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:07.779 [2024-12-08 14:11:10.689839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:08.040 [2024-12-08 14:11:10.701366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:08.040 [2024-12-08 14:11:10.701457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:08.040 [2024-12-08 14:11:10.701500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.502 ms 00:16:08.040 [2024-12-08 14:11:10.701519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:08.040 [2024-12-08 14:11:10.710727] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:08.040 [2024-12-08 14:11:10.715190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:08.040 [2024-12-08 14:11:10.715275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:08.040 [2024-12-08 14:11:10.715316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.604 ms 00:16:08.040 [2024-12-08 14:11:10.715333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:08.040 [2024-12-08 14:11:10.782599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:08.040 [2024-12-08 14:11:10.782757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:08.040 [2024-12-08 14:11:10.782817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.234 ms 00:16:08.040 [2024-12-08 14:11:10.782842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:08.040 [2024-12-08 14:11:10.782890] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:08.040 [2024-12-08 14:11:10.782926] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:12.246 [2024-12-08 14:11:14.274964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.275225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:12.246 [2024-12-08 14:11:14.275261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3492.053 ms 00:16:12.246 [2024-12-08 14:11:14.275272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.275498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.275511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:12.246 [2024-12-08 14:11:14.275524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:16:12.246 [2024-12-08 14:11:14.275533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.302201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.302255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:12.246 [2024-12-08 14:11:14.302273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.606 ms 00:16:12.246 [2024-12-08 14:11:14.302284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.327881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.327930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:12.246 [2024-12-08 14:11:14.327950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.542 ms 00:16:12.246 [2024-12-08 14:11:14.327958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.328412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.328429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:12.246 [2024-12-08 14:11:14.328441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:16:12.246 [2024-12-08 14:11:14.328448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.399520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.399575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:12.246 [2024-12-08 14:11:14.399592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.018 ms 00:16:12.246 [2024-12-08 14:11:14.399602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.427686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.427736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:12.246 [2024-12-08 14:11:14.427752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.025 ms 00:16:12.246 [2024-12-08 14:11:14.427761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.429418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 
14:11:14.429628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:12.246 [2024-12-08 14:11:14.429656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:16:12.246 [2024-12-08 14:11:14.429667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.456270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.456459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:12.246 [2024-12-08 14:11:14.456487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.539 ms 00:16:12.246 [2024-12-08 14:11:14.456495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.456580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.456590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:12.246 [2024-12-08 14:11:14.456605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:12.246 [2024-12-08 14:11:14.456613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.456724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.246 [2024-12-08 14:11:14.456735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:12.246 [2024-12-08 14:11:14.456746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:12.246 [2024-12-08 14:11:14.456754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.246 [2024-12-08 14:11:14.458390] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3831.624 ms, result 0 00:16:12.246 { 00:16:12.246 "name": "ftl0", 00:16:12.246 "uuid": "36e08226-0cbe-4915-b915-0f32053f1f2c" 00:16:12.246 } 00:16:12.246 14:11:14 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:12.246 14:11:14 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:12.246 14:11:14 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:12.246 14:11:14 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:12.246 [2024-12-08 14:11:14.790023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:12.246 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:12.246 Zero copy mechanism will not be used. 00:16:12.246 Running I/O for 4 seconds... 
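All three performance runs in this stage follow the same driver pattern: bdevperf was started earlier with -z (stay idle until told to run) and -T ftl0 (exercise only that bdev), the FTL stack was assembled over its RPC socket, and bdevperf.py perform_tests kicks off each workload. The 69632-byte I/O size of this first run (68 KiB) is deliberately above the 65536-byte zero-copy threshold noted above. A minimal sketch of that sequence, assuming the repo layout from this job and the default /var/tmp/spdk.sock socket (the polling loop is a crude stand-in for the suite's waitforlisten helper):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/examples/bdevperf -z -T ftl0 &                # idle until perform_tests arrives
  bdevperf_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done       # wait for the RPC listener
  # ...create the nvme/lvol/split/ftl bdevs with $SPDK/scripts/rpc.py as traced above...
  $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  kill $bdevperf_pid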
00:16:16.444 00:16:16.444 Latency(us) 00:16:16.444 [2024-12-08T14:11:19.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.444 [2024-12-08T14:11:19.364Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:16.444 ftl0 : 4.00 1486.16 98.69 0.00 0.00 707.43 147.30 1966.08 00:16:16.444 [2024-12-08T14:11:19.364Z] =================================================================================================================== 00:16:16.444 [2024-12-08T14:11:19.364Z] Total : 1486.16 98.69 0.00 0.00 707.43 147.30 1966.08 00:16:16.444 [2024-12-08 14:11:18.801001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:16.444 0 00:16:16.444 14:11:18 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-12-08 14:11:18.921443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:16.444 Running I/O for 4 seconds... 00:16:20.650 00:16:20.650 Latency(us) 00:16:20.650 [2024-12-08T14:11:23.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.650 [2024-12-08T14:11:23.570Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.650 ftl0 : 4.04 4932.23 19.27 0.00 0.00 25828.97 367.06 49000.76 00:16:20.650 [2024-12-08T14:11:23.570Z] =================================================================================================================== 00:16:20.650 [2024-12-08T14:11:23.570Z] Total : 4932.23 19.27 0.00 0.00 25828.97 0.00 49000.76 00:16:20.650 [2024-12-08 14:11:22.968096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:20.650 0 00:16:20.650 14:11:22 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-12-08 14:11:23.085055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:20.650 Running I/O for 4 seconds... 
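The MiB/s column in these tables is just IOPS times I/O size, which makes a quick sanity check possible; with the figures from the two randwrite tables above (illustrative one-liners, not part of the test):

  echo '1486.16 * 69632 / 1048576' | bc -l   # ~98.69 MiB/s for the depth-1, 68 KiB run
  echo '4932.23 * 4096 / 1048576' | bc -l    # ~19.27 MiB/s for the depth-128, 4 KiB run

The verify workload started above also reads every written block back and compares it, so its table below carries an extra "Verification LBA range" line.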
00:16:24.863 00:16:24.863 Latency(us) 00:16:24.863 [2024-12-08T14:11:27.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.863 [2024-12-08T14:11:27.783Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.863 Verification LBA range: start 0x0 length 0x1400000 00:16:24.863 ftl0 : 4.01 8439.83 32.97 0.00 0.00 15128.77 218.98 30247.38 00:16:24.863 [2024-12-08T14:11:27.783Z] =================================================================================================================== 00:16:24.863 [2024-12-08T14:11:27.783Z] Total : 8439.83 32.97 0.00 0.00 15128.77 0.00 30247.38 00:16:24.863 [2024-12-08 14:11:27.110582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:24.863 0 00:16:24.863 14:11:27 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:24.863 [2024-12-08 14:11:27.317636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.317823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:24.863 [2024-12-08 14:11:27.317905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:24.863 [2024-12-08 14:11:27.317932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.863 [2024-12-08 14:11:27.317977] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:24.863 [2024-12-08 14:11:27.320888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.321075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:24.863 [2024-12-08 14:11:27.321232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:16:24.863 [2024-12-08 14:11:27.321269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.863 [2024-12-08 14:11:27.324498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.324660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:24.863 [2024-12-08 14:11:27.324734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:16:24.863 [2024-12-08 14:11:27.324762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.863 [2024-12-08 14:11:27.573956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.574129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:24.863 [2024-12-08 14:11:27.574213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 249.157 ms 00:16:24.863 [2024-12-08 14:11:27.574241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.863 [2024-12-08 14:11:27.580388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.580511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:24.863 [2024-12-08 14:11:27.580575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.073 ms 00:16:24.863 [2024-12-08 14:11:27.580651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.863 [2024-12-08 14:11:27.604840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.604975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:16:24.863 [2024-12-08 14:11:27.605003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.105 ms 00:16:24.863 [2024-12-08 14:11:27.605016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.863 [2024-12-08 14:11:27.620831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.863 [2024-12-08 14:11:27.620876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:24.863 [2024-12-08 14:11:27.620888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.753 ms 00:16:24.864 [2024-12-08 14:11:27.620898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.864 [2024-12-08 14:11:27.621058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.864 [2024-12-08 14:11:27.621073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:24.864 [2024-12-08 14:11:27.621082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:24.864 [2024-12-08 14:11:27.621092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.864 [2024-12-08 14:11:27.645129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.864 [2024-12-08 14:11:27.645171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:24.864 [2024-12-08 14:11:27.645183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.023 ms 00:16:24.864 [2024-12-08 14:11:27.645191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.864 [2024-12-08 14:11:27.668931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.864 [2024-12-08 14:11:27.668968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:24.864 [2024-12-08 14:11:27.668979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.692 ms 00:16:24.864 [2024-12-08 14:11:27.669004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.864 [2024-12-08 14:11:27.691906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.864 [2024-12-08 14:11:27.692054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:24.864 [2024-12-08 14:11:27.692070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.870 ms 00:16:24.864 [2024-12-08 14:11:27.692078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.864 [2024-12-08 14:11:27.714885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.864 [2024-12-08 14:11:27.714919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:24.864 [2024-12-08 14:11:27.714929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.748 ms 00:16:24.864 [2024-12-08 14:11:27.714937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.864 [2024-12-08 14:11:27.714967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:24.864 [2024-12-08 14:11:27.715005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715033] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 14:11:27.715254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:24.864 [2024-12-08 
14:11:27.715264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:16:24.864 [2024-12-08 14:11:27.715271 - 14:11:27.715867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 30 - 100: 0 / 261120 wr_cnt: 0 state: free (71 identical records)
00:16:24.865 [2024-12-08 14:11:27.715885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:24.865 [2024-12-08 14:11:27.715892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36e08226-0cbe-4915-b915-0f32053f1f2c
00:16:24.865 [2024-12-08 14:11:27.715903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:24.865
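The dump above is dominated by repetition: every band from 29 through 100 reports the same zeroed counters. When scanning a long run it is usually quicker to bucket the records than to read them linearly; a minimal sketch, assuming this console output has been captured to a hypothetical build.log:

    # bucket band records by their counters/state instead of reading them one by one
    grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' build.log \
        | sed -E 's/^Band [0-9]+: //' \
        | sort | uniq -c | sort -rn

Here everything collapses into a single bucket, 0 / 261120 wr_cnt: 0 state: free, i.e. no band left the erased state during the bdevperf run.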
[2024-12-08 14:11:27.715910] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:24.865 [2024-12-08 14:11:27.715918] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:24.865 [2024-12-08 14:11:27.715926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:24.865 [2024-12-08 14:11:27.715934] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:24.865 [2024-12-08 14:11:27.715943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:24.865 [2024-12-08 14:11:27.715951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:24.865 [2024-12-08 14:11:27.715958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:24.865 [2024-12-08 14:11:27.715966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:24.865 [2024-12-08 14:11:27.715973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.865 [2024-12-08 14:11:27.715990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:24.865 [2024-12-08 14:11:27.715998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:16:24.865 [2024-12-08 14:11:27.716006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.865 [2024-12-08 14:11:27.728481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.865 [2024-12-08 14:11:27.728515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:24.865 [2024-12-08 14:11:27.728525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:16:24.865 [2024-12-08 14:11:27.728539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.865 [2024-12-08 14:11:27.728720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.865 [2024-12-08 14:11:27.728729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:24.865 [2024-12-08 14:11:27.728737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:24.865 [2024-12-08 14:11:27.728745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.865 [2024-12-08 14:11:27.766095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.865 [2024-12-08 14:11:27.766228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.865 [2024-12-08 14:11:27.766245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.865 [2024-12-08 14:11:27.766255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.865 [2024-12-08 14:11:27.766307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.865 [2024-12-08 14:11:27.766317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.865 [2024-12-08 14:11:27.766325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.866 [2024-12-08 14:11:27.766333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.866 [2024-12-08 14:11:27.766391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.866 [2024-12-08 14:11:27.766402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.866 [2024-12-08 14:11:27.766410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.866 [2024-12-08 14:11:27.766423] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.866 [2024-12-08 14:11:27.766437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:24.866 [2024-12-08 14:11:27.766446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.866 [2024-12-08 14:11:27.766453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:24.866 [2024-12-08 14:11:27.766462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.839474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.839518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.127 [2024-12-08 14:11:27.839530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.839542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.868962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.127 [2024-12-08 14:11:27.869027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.127 [2024-12-08 14:11:27.869120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.127 [2024-12-08 14:11:27.869189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.127 [2024-12-08 14:11:27.869308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:25.127 [2024-12-08 14:11:27.869364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.127 [2024-12-08 14:11:27.869423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.127 [2024-12-08 14:11:27.869483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.127 [2024-12-08 14:11:27.869491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.127 [2024-12-08 14:11:27.869500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.127 [2024-12-08 14:11:27.869616] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 551.947 ms, result 0 00:16:25.127 true 00:16:25.127 14:11:27 -- ftl/bdevperf.sh@37 -- # killprocess 71564 00:16:25.127 14:11:27 -- common/autotest_common.sh@936 -- # '[' -z 71564 ']' 00:16:25.127 14:11:27 -- common/autotest_common.sh@940 -- # kill -0 71564 00:16:25.127 14:11:27 -- common/autotest_common.sh@941 -- # uname 00:16:25.127 14:11:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.127 14:11:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71564 00:16:25.127 killing process with pid 71564 00:16:25.127 Received shutdown signal, test time was about 4.000000 seconds 00:16:25.127 00:16:25.127 Latency(us) 00:16:25.127 [2024-12-08T14:11:28.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.127 [2024-12-08T14:11:28.047Z] =================================================================================================================== 00:16:25.127 [2024-12-08T14:11:28.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.127 14:11:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.127 14:11:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.127 14:11:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71564' 00:16:25.128 14:11:27 -- common/autotest_common.sh@955 -- # kill 71564 00:16:25.128 14:11:27 -- common/autotest_common.sh@960 -- # wait 71564 00:16:28.430 14:11:31 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:28.430 14:11:31 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:28.430 14:11:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.430 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:16:28.692 Remove shared memory files 00:16:28.692 14:11:31 -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:28.692 14:11:31 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:28.692 14:11:31 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:28.692 14:11:31 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:28.692 14:11:31 -- ftl/common.sh@207 -- # rm -f rm -f 00:16:28.692 14:11:31 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:28.692 14:11:31 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:28.692 ************************************ 00:16:28.692 END TEST ftl_bdevperf 00:16:28.692 ************************************ 00:16:28.692 00:16:28.692 real 0m24.548s 00:16:28.692 user 0m26.793s 00:16:28.692 sys 0m1.045s 00:16:28.692 14:11:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:28.692 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:16:28.692 14:11:31 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:28.692 14:11:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
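The teardown xtrace above is autotest_common.sh's killprocess walking its checks: kill -0 to confirm pid 71564 is still alive, ps --no-headers -o comm= to learn the process name (reactor_0), a guard so the sudo wrapper itself is never signalled, then kill and wait. Reconstructed from that trace as a sketch, not the verbatim helper (which also branches on uname for non-Linux hosts):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                       # still running?
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        [[ $process_name == sudo ]] && return 1          # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it and propagate its exit status
    }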
00:16:28.692 14:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.692 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:16:28.692 ************************************ 00:16:28.692 START TEST ftl_trim 00:16:28.692 ************************************ 00:16:28.692 14:11:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:28.692 * Looking for test storage... 00:16:28.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:28.692 14:11:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:28.692 14:11:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:28.692 14:11:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:28.692 14:11:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:28.692 14:11:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:28.692 14:11:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:28.692 14:11:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:28.692 14:11:31 -- scripts/common.sh@335 -- # IFS=.-: 00:16:28.692 14:11:31 -- scripts/common.sh@335 -- # read -ra ver1 00:16:28.692 14:11:31 -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.692 14:11:31 -- scripts/common.sh@336 -- # read -ra ver2 00:16:28.693 14:11:31 -- scripts/common.sh@337 -- # local 'op=<' 00:16:28.693 14:11:31 -- scripts/common.sh@339 -- # ver1_l=2 00:16:28.693 14:11:31 -- scripts/common.sh@340 -- # ver2_l=1 00:16:28.693 14:11:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:28.693 14:11:31 -- scripts/common.sh@343 -- # case "$op" in 00:16:28.693 14:11:31 -- scripts/common.sh@344 -- # : 1 00:16:28.693 14:11:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:28.693 14:11:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.693 14:11:31 -- scripts/common.sh@364 -- # decimal 1 00:16:28.955 14:11:31 -- scripts/common.sh@352 -- # local d=1 00:16:28.955 14:11:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.955 14:11:31 -- scripts/common.sh@354 -- # echo 1 00:16:28.955 14:11:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:28.955 14:11:31 -- scripts/common.sh@365 -- # decimal 2 00:16:28.955 14:11:31 -- scripts/common.sh@352 -- # local d=2 00:16:28.955 14:11:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.955 14:11:31 -- scripts/common.sh@354 -- # echo 2 00:16:28.955 14:11:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:28.955 14:11:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:28.955 14:11:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:28.955 14:11:31 -- scripts/common.sh@367 -- # return 0 00:16:28.955 14:11:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.955 14:11:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.955 --rc genhtml_branch_coverage=1 00:16:28.955 --rc genhtml_function_coverage=1 00:16:28.955 --rc genhtml_legend=1 00:16:28.955 --rc geninfo_all_blocks=1 00:16:28.955 --rc geninfo_unexecuted_blocks=1 00:16:28.955 00:16:28.955 ' 00:16:28.955 14:11:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.955 --rc genhtml_branch_coverage=1 00:16:28.955 --rc genhtml_function_coverage=1 00:16:28.955 --rc genhtml_legend=1 00:16:28.955 --rc geninfo_all_blocks=1 00:16:28.955 --rc geninfo_unexecuted_blocks=1 00:16:28.955 00:16:28.955 ' 00:16:28.955 14:11:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.955 --rc genhtml_branch_coverage=1 00:16:28.955 --rc genhtml_function_coverage=1 00:16:28.955 --rc genhtml_legend=1 00:16:28.955 --rc geninfo_all_blocks=1 00:16:28.955 --rc geninfo_unexecuted_blocks=1 00:16:28.955 00:16:28.955 ' 00:16:28.955 14:11:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:28.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.955 --rc genhtml_branch_coverage=1 00:16:28.955 --rc genhtml_function_coverage=1 00:16:28.955 --rc genhtml_legend=1 00:16:28.955 --rc geninfo_all_blocks=1 00:16:28.955 --rc geninfo_unexecuted_blocks=1 00:16:28.955 00:16:28.955 ' 00:16:28.955 14:11:31 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:28.955 14:11:31 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:28.955 14:11:31 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:28.955 14:11:31 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:28.955 14:11:31 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
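The scripts/common.sh xtrace above is cmp_versions deciding whether the installed lcov is new enough to pass the branch/function coverage flags: both version strings are split on '.', '-' and ':' and compared field by field as plain integers, with missing fields counting as zero. The same comparison, condensed into a sketch (not the exact helper):

    version_lt() {    # succeeds when $1 is an older version than $2, e.g. version_lt 1.15 2
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1      # versions are equal
    }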
00:16:28.955 14:11:31 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:28.955 14:11:31 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.955 14:11:31 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:28.955 14:11:31 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:28.955 14:11:31 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.955 14:11:31 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.955 14:11:31 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:28.955 14:11:31 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:28.955 14:11:31 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:28.955 14:11:31 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:28.955 14:11:31 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:28.955 14:11:31 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:28.955 14:11:31 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.955 14:11:31 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.955 14:11:31 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:28.955 14:11:31 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:28.955 14:11:31 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:28.955 14:11:31 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:28.955 14:11:31 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:28.955 14:11:31 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:28.955 14:11:31 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:28.955 14:11:31 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:28.955 14:11:31 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:28.955 14:11:31 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:28.955 14:11:31 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.955 14:11:31 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:28.955 14:11:31 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:28.955 14:11:31 -- ftl/trim.sh@25 -- # timeout=240 00:16:28.955 14:11:31 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:28.955 14:11:31 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:28.955 14:11:31 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:28.955 14:11:31 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:28.955 14:11:31 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:28.955 14:11:31 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:28.955 14:11:31 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:28.955 14:11:31 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:28.955 14:11:31 -- ftl/trim.sh@40 -- # svcpid=71934 00:16:28.955 14:11:31 -- ftl/trim.sh@41 -- # waitforlisten 71934 00:16:28.955 14:11:31 -- common/autotest_common.sh@829 -- # '[' -z 71934 ']' 00:16:28.955 14:11:31 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:28.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
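waitforlisten then parks the script until the freshly forked spdk_tgt (pid 71934) answers on its RPC socket. Its essence is a bounded poll of the RPC endpoint; a minimal sketch, assuming rpc.py and the default /var/tmp/spdk.sock rather than the literal autotest_common.sh implementation:

    waitforrpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died before it ever listened
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                              # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1                                      # timed out
    }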
00:16:28.955 14:11:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.955 14:11:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.955 14:11:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.955 14:11:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.955 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:16:28.955 [2024-12-08 14:11:31.729785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:28.955 [2024-12-08 14:11:31.730131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71934 ] 00:16:29.218 [2024-12-08 14:11:31.887391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:29.218 [2024-12-08 14:11:32.114383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:29.218 [2024-12-08 14:11:32.115139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.218 [2024-12-08 14:11:32.115598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.218 [2024-12-08 14:11:32.115685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.606 14:11:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.606 14:11:33 -- common/autotest_common.sh@862 -- # return 0 00:16:30.606 14:11:33 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:30.606 14:11:33 -- ftl/common.sh@54 -- # local name=nvme0 00:16:30.606 14:11:33 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:30.606 14:11:33 -- ftl/common.sh@56 -- # local size=103424 00:16:30.606 14:11:33 -- ftl/common.sh@59 -- # local base_bdev 00:16:30.606 14:11:33 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:30.866 14:11:33 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:30.866 14:11:33 -- ftl/common.sh@62 -- # local base_size 00:16:30.866 14:11:33 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:30.866 14:11:33 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:30.866 14:11:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:30.866 14:11:33 -- common/autotest_common.sh@1369 -- # local bs 00:16:30.866 14:11:33 -- common/autotest_common.sh@1370 -- # local nb 00:16:30.866 14:11:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:30.866 14:11:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:30.866 { 00:16:30.866 "name": "nvme0n1", 00:16:30.866 "aliases": [ 00:16:30.866 "446552f1-889b-4a05-87ef-209e54962eef" 00:16:30.866 ], 00:16:30.866 "product_name": "NVMe disk", 00:16:30.867 "block_size": 4096, 00:16:30.867 "num_blocks": 1310720, 00:16:30.867 "uuid": "446552f1-889b-4a05-87ef-209e54962eef", 00:16:30.867 "assigned_rate_limits": { 00:16:30.867 "rw_ios_per_sec": 0, 00:16:30.867 "rw_mbytes_per_sec": 0, 00:16:30.867 "r_mbytes_per_sec": 0, 00:16:30.867 "w_mbytes_per_sec": 0 00:16:30.867 }, 00:16:30.867 "claimed": true, 00:16:30.867 "claim_type": "read_many_write_one", 00:16:30.867 "zoned": false, 00:16:30.867 "supported_io_types": { 00:16:30.867 "read": true, 00:16:30.867 "write": true, 00:16:30.867 "unmap": true, 00:16:30.867 
"write_zeroes": true, 00:16:30.867 "flush": true, 00:16:30.867 "reset": true, 00:16:30.867 "compare": true, 00:16:30.867 "compare_and_write": false, 00:16:30.867 "abort": true, 00:16:30.867 "nvme_admin": true, 00:16:30.867 "nvme_io": true 00:16:30.867 }, 00:16:30.867 "driver_specific": { 00:16:30.867 "nvme": [ 00:16:30.867 { 00:16:30.867 "pci_address": "0000:00:07.0", 00:16:30.867 "trid": { 00:16:30.867 "trtype": "PCIe", 00:16:30.867 "traddr": "0000:00:07.0" 00:16:30.867 }, 00:16:30.867 "ctrlr_data": { 00:16:30.867 "cntlid": 0, 00:16:30.867 "vendor_id": "0x1b36", 00:16:30.867 "model_number": "QEMU NVMe Ctrl", 00:16:30.867 "serial_number": "12341", 00:16:30.867 "firmware_revision": "8.0.0", 00:16:30.867 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:30.867 "oacs": { 00:16:30.867 "security": 0, 00:16:30.867 "format": 1, 00:16:30.867 "firmware": 0, 00:16:30.867 "ns_manage": 1 00:16:30.867 }, 00:16:30.867 "multi_ctrlr": false, 00:16:30.867 "ana_reporting": false 00:16:30.867 }, 00:16:30.867 "vs": { 00:16:30.867 "nvme_version": "1.4" 00:16:30.867 }, 00:16:30.867 "ns_data": { 00:16:30.867 "id": 1, 00:16:30.867 "can_share": false 00:16:30.867 } 00:16:30.867 } 00:16:30.867 ], 00:16:30.867 "mp_policy": "active_passive" 00:16:30.867 } 00:16:30.867 } 00:16:30.867 ]' 00:16:30.867 14:11:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:30.867 14:11:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:30.867 14:11:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:31.125 14:11:33 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:31.125 14:11:33 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:31.125 14:11:33 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:31.125 14:11:33 -- ftl/common.sh@63 -- # base_size=5120 00:16:31.125 14:11:33 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:31.125 14:11:33 -- ftl/common.sh@67 -- # clear_lvols 00:16:31.125 14:11:33 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:31.125 14:11:33 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:31.125 14:11:34 -- ftl/common.sh@28 -- # stores=f1169c89-b298-4c12-aea3-423fd16050f1 00:16:31.125 14:11:34 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:31.125 14:11:34 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1169c89-b298-4c12-aea3-423fd16050f1 00:16:31.384 14:11:34 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:31.642 14:11:34 -- ftl/common.sh@68 -- # lvs=25b17777-e6ea-40ac-aa51-b76afed73337 00:16:31.642 14:11:34 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 25b17777-e6ea-40ac-aa51-b76afed73337 00:16:31.900 14:11:34 -- ftl/trim.sh@43 -- # split_bdev=7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:31.900 14:11:34 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:31.900 14:11:34 -- ftl/common.sh@35 -- # local name=nvc0 00:16:31.900 14:11:34 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:31.900 14:11:34 -- ftl/common.sh@37 -- # local base_bdev=7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:31.900 14:11:34 -- ftl/common.sh@38 -- # local cache_size= 00:16:31.900 14:11:34 -- ftl/common.sh@41 -- # get_bdev_size 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:31.900 14:11:34 -- common/autotest_common.sh@1367 -- # local bdev_name=7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:31.900 14:11:34 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:31.900 14:11:34 -- common/autotest_common.sh@1369 -- # local bs 00:16:31.900 14:11:34 -- common/autotest_common.sh@1370 -- # local nb 00:16:31.900 14:11:34 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:31.900 14:11:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:31.900 { 00:16:31.900 "name": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:31.900 "aliases": [ 00:16:31.900 "lvs/nvme0n1p0" 00:16:31.900 ], 00:16:31.900 "product_name": "Logical Volume", 00:16:31.900 "block_size": 4096, 00:16:31.900 "num_blocks": 26476544, 00:16:31.900 "uuid": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:31.900 "assigned_rate_limits": { 00:16:31.900 "rw_ios_per_sec": 0, 00:16:31.900 "rw_mbytes_per_sec": 0, 00:16:31.900 "r_mbytes_per_sec": 0, 00:16:31.900 "w_mbytes_per_sec": 0 00:16:31.900 }, 00:16:31.900 "claimed": false, 00:16:31.900 "zoned": false, 00:16:31.900 "supported_io_types": { 00:16:31.900 "read": true, 00:16:31.900 "write": true, 00:16:31.900 "unmap": true, 00:16:31.900 "write_zeroes": true, 00:16:31.900 "flush": false, 00:16:31.900 "reset": true, 00:16:31.900 "compare": false, 00:16:31.900 "compare_and_write": false, 00:16:31.900 "abort": false, 00:16:31.900 "nvme_admin": false, 00:16:31.900 "nvme_io": false 00:16:31.900 }, 00:16:31.900 "driver_specific": { 00:16:31.900 "lvol": { 00:16:31.900 "lvol_store_uuid": "25b17777-e6ea-40ac-aa51-b76afed73337", 00:16:31.900 "base_bdev": "nvme0n1", 00:16:31.900 "thin_provision": true, 00:16:31.900 "snapshot": false, 00:16:31.900 "clone": false, 00:16:31.900 "esnap_clone": false 00:16:31.900 } 00:16:31.900 } 00:16:31.900 } 00:16:31.900 ]' 00:16:31.900 14:11:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:32.157 14:11:34 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:32.157 14:11:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:32.157 14:11:34 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:32.157 14:11:34 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:32.157 14:11:34 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:32.157 14:11:34 -- ftl/common.sh@41 -- # local base_size=5171 00:16:32.157 14:11:34 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:32.157 14:11:34 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:32.413 14:11:35 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:32.413 14:11:35 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:32.413 14:11:35 -- ftl/common.sh@48 -- # get_bdev_size 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:32.413 14:11:35 -- common/autotest_common.sh@1367 -- # local bdev_name=7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:32.413 14:11:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:32.413 14:11:35 -- common/autotest_common.sh@1369 -- # local bs 00:16:32.413 14:11:35 -- common/autotest_common.sh@1370 -- # local nb 00:16:32.413 14:11:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:32.413 14:11:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:32.413 { 00:16:32.413 "name": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:32.413 "aliases": [ 00:16:32.413 "lvs/nvme0n1p0" 00:16:32.413 ], 00:16:32.413 "product_name": "Logical Volume", 00:16:32.413 "block_size": 4096, 00:16:32.413 "num_blocks": 26476544, 
00:16:32.413 "uuid": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:32.413 "assigned_rate_limits": { 00:16:32.413 "rw_ios_per_sec": 0, 00:16:32.413 "rw_mbytes_per_sec": 0, 00:16:32.413 "r_mbytes_per_sec": 0, 00:16:32.413 "w_mbytes_per_sec": 0 00:16:32.413 }, 00:16:32.413 "claimed": false, 00:16:32.413 "zoned": false, 00:16:32.413 "supported_io_types": { 00:16:32.413 "read": true, 00:16:32.413 "write": true, 00:16:32.413 "unmap": true, 00:16:32.413 "write_zeroes": true, 00:16:32.413 "flush": false, 00:16:32.413 "reset": true, 00:16:32.413 "compare": false, 00:16:32.413 "compare_and_write": false, 00:16:32.413 "abort": false, 00:16:32.413 "nvme_admin": false, 00:16:32.413 "nvme_io": false 00:16:32.413 }, 00:16:32.413 "driver_specific": { 00:16:32.413 "lvol": { 00:16:32.413 "lvol_store_uuid": "25b17777-e6ea-40ac-aa51-b76afed73337", 00:16:32.413 "base_bdev": "nvme0n1", 00:16:32.413 "thin_provision": true, 00:16:32.413 "snapshot": false, 00:16:32.414 "clone": false, 00:16:32.414 "esnap_clone": false 00:16:32.414 } 00:16:32.414 } 00:16:32.414 } 00:16:32.414 ]' 00:16:32.414 14:11:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:32.414 14:11:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:32.414 14:11:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:32.671 14:11:35 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:32.671 14:11:35 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:32.671 14:11:35 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:32.671 14:11:35 -- ftl/common.sh@48 -- # cache_size=5171 00:16:32.671 14:11:35 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:32.671 14:11:35 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:32.671 14:11:35 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:32.671 14:11:35 -- ftl/trim.sh@47 -- # get_bdev_size 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:32.671 14:11:35 -- common/autotest_common.sh@1367 -- # local bdev_name=7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:32.671 14:11:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:32.671 14:11:35 -- common/autotest_common.sh@1369 -- # local bs 00:16:32.671 14:11:35 -- common/autotest_common.sh@1370 -- # local nb 00:16:32.671 14:11:35 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b0f85ae-c949-4d37-a5d8-3473ee19959b 00:16:32.930 14:11:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:32.930 { 00:16:32.930 "name": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:32.930 "aliases": [ 00:16:32.930 "lvs/nvme0n1p0" 00:16:32.930 ], 00:16:32.930 "product_name": "Logical Volume", 00:16:32.930 "block_size": 4096, 00:16:32.930 "num_blocks": 26476544, 00:16:32.930 "uuid": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:32.930 "assigned_rate_limits": { 00:16:32.930 "rw_ios_per_sec": 0, 00:16:32.930 "rw_mbytes_per_sec": 0, 00:16:32.930 "r_mbytes_per_sec": 0, 00:16:32.930 "w_mbytes_per_sec": 0 00:16:32.930 }, 00:16:32.930 "claimed": false, 00:16:32.930 "zoned": false, 00:16:32.930 "supported_io_types": { 00:16:32.930 "read": true, 00:16:32.930 "write": true, 00:16:32.930 "unmap": true, 00:16:32.930 "write_zeroes": true, 00:16:32.930 "flush": false, 00:16:32.930 "reset": true, 00:16:32.930 "compare": false, 00:16:32.930 "compare_and_write": false, 00:16:32.930 "abort": false, 00:16:32.930 "nvme_admin": false, 00:16:32.930 "nvme_io": false 00:16:32.930 }, 00:16:32.930 "driver_specific": { 00:16:32.930 "lvol": { 00:16:32.930 
"lvol_store_uuid": "25b17777-e6ea-40ac-aa51-b76afed73337", 00:16:32.930 "base_bdev": "nvme0n1", 00:16:32.930 "thin_provision": true, 00:16:32.930 "snapshot": false, 00:16:32.930 "clone": false, 00:16:32.930 "esnap_clone": false 00:16:32.930 } 00:16:32.930 } 00:16:32.930 } 00:16:32.930 ]' 00:16:32.930 14:11:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:32.930 14:11:35 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:32.930 14:11:35 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:32.930 14:11:35 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:32.930 14:11:35 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:32.930 14:11:35 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:32.930 14:11:35 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:32.930 14:11:35 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7b0f85ae-c949-4d37-a5d8-3473ee19959b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:33.188 [2024-12-08 14:11:35.929276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.188 [2024-12-08 14:11:35.929312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:33.188 [2024-12-08 14:11:35.929325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:33.188 [2024-12-08 14:11:35.929331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.188 [2024-12-08 14:11:35.931525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.188 [2024-12-08 14:11:35.931644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:33.188 [2024-12-08 14:11:35.931661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:16:33.188 [2024-12-08 14:11:35.931667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.188 [2024-12-08 14:11:35.931739] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:33.188 [2024-12-08 14:11:35.932324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:33.188 [2024-12-08 14:11:35.932344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.188 [2024-12-08 14:11:35.932350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:33.188 [2024-12-08 14:11:35.932358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:16:33.188 [2024-12-08 14:11:35.932364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.188 [2024-12-08 14:11:35.932683] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:16:33.189 [2024-12-08 14:11:35.933673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.933701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:33.189 [2024-12-08 14:11:35.933709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:33.189 [2024-12-08 14:11:35.933717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.939037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.939120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:33.189 
[2024-12-08 14:11:35.939162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.245 ms 00:16:33.189 [2024-12-08 14:11:35.939181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.939293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.939319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:33.189 [2024-12-08 14:11:35.939335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:33.189 [2024-12-08 14:11:35.939353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.939393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.939415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:33.189 [2024-12-08 14:11:35.939434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:33.189 [2024-12-08 14:11:35.939750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.939862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:33.189 [2024-12-08 14:11:35.942932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.943026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:33.189 [2024-12-08 14:11:35.943075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:16:33.189 [2024-12-08 14:11:35.943096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.943174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.943196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:33.189 [2024-12-08 14:11:35.943213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:33.189 [2024-12-08 14:11:35.943228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.943284] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:33.189 [2024-12-08 14:11:35.943483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:33.189 [2024-12-08 14:11:35.943520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:33.189 [2024-12-08 14:11:35.943550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:33.189 [2024-12-08 14:11:35.943579] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:33.189 [2024-12-08 14:11:35.943606] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:33.189 [2024-12-08 14:11:35.943681] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:33.189 [2024-12-08 14:11:35.943697] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:33.189 [2024-12-08 14:11:35.943714] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:33.189 [2024-12-08 14:11:35.943729] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:33.189 [2024-12-08 
14:11:35.943746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.943761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:33.189 [2024-12-08 14:11:35.943777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:16:33.189 [2024-12-08 14:11:35.943815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.943894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.189 [2024-12-08 14:11:35.943913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:33.189 [2024-12-08 14:11:35.944065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:33.189 [2024-12-08 14:11:35.944104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.189 [2024-12-08 14:11:35.944275] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:33.189 [2024-12-08 14:11:35.944316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:33.189 [2024-12-08 14:11:35.944354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:33.189 [2024-12-08 14:11:35.944519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:33.189 [2024-12-08 14:11:35.944555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.189 [2024-12-08 14:11:35.944577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:33.189 [2024-12-08 14:11:35.944587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:33.189 [2024-12-08 14:11:35.944598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:33.189 [2024-12-08 14:11:35.944608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:33.189 [2024-12-08 14:11:35.944622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:33.189 [2024-12-08 14:11:35.944632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:33.189 [2024-12-08 14:11:35.944655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:33.189 [2024-12-08 14:11:35.944668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944677] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:33.189 [2024-12-08 14:11:35.944689] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:33.189 [2024-12-08 14:11:35.944699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:33.189 [2024-12-08 14:11:35.944721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:33.189 [2024-12-08 14:11:35.944732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:33.189 [2024-12-08 14:11:35.944754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:33.189 [2024-12-08 14:11:35.944783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:33.189 [2024-12-08 14:11:35.944821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:33.189 [2024-12-08 14:11:35.944852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.189 [2024-12-08 14:11:35.944873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:33.189 [2024-12-08 14:11:35.944884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:33.189 [2024-12-08 14:11:35.944893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:33.189 [2024-12-08 14:11:35.944907] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:33.189 [2024-12-08 14:11:35.944918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:33.189 [2024-12-08 14:11:35.944931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:33.189 [2024-12-08 14:11:35.944941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:33.189 [2024-12-08 14:11:35.944956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:33.189 [2024-12-08 14:11:35.944966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:33.189 [2024-12-08 14:11:35.944978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:33.189 [2024-12-08 14:11:35.945011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:33.189 [2024-12-08 14:11:35.945025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:33.189 [2024-12-08 14:11:35.945035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:33.189 [2024-12-08 14:11:35.945049] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:33.189 [2024-12-08 14:11:35.945064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.189 [2024-12-08 14:11:35.945078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:33.189 [2024-12-08 14:11:35.945089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:33.189 [2024-12-08 14:11:35.945102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:33.189 [2024-12-08 14:11:35.945113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:33.190 [2024-12-08 14:11:35.945125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:33.190 [2024-12-08 14:11:35.945136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:33.190 [2024-12-08 14:11:35.945149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:33.190 [2024-12-08 14:11:35.945160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:33.190 [2024-12-08 14:11:35.945172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:33.190 [2024-12-08 14:11:35.945182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:33.190 [2024-12-08 14:11:35.945195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:33.190 [2024-12-08 14:11:35.945218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:33.190 [2024-12-08 14:11:35.945236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:33.190 [2024-12-08 14:11:35.945247] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:33.190 [2024-12-08 14:11:35.945261] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:33.190 [2024-12-08 14:11:35.945273] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:33.190 [2024-12-08 14:11:35.945286] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:33.190 [2024-12-08 14:11:35.945296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:33.190 [2024-12-08 14:11:35.945309] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:33.190 [2024-12-08 14:11:35.945321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:35.945334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:33.190 [2024-12-08 14:11:35.945345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:16:33.190 [2024-12-08 14:11:35.945357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:35.961974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
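The superblock metadata layout is dumped with block counts in hex, while the earlier region dump reports MiB; with the 4096-byte block size probed from nvme0n1 the two agree exactly. For instance the L2P region (type:0x2, blk_sz:0x5a00) is the 90.00 MiB shown above:

    blk_sz=0x5a00                                    # Region type:0x2 (l2p) from the dump above
    echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"    # -> 90 MiB, matching 'Region l2p ... blocks: 90.00 MiB'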
00:16:33.190 [2024-12-08 14:11:35.962011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:33.190 [2024-12-08 14:11:35.962019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.511 ms 00:16:33.190 [2024-12-08 14:11:35.962026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:35.962124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:35.962138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:33.190 [2024-12-08 14:11:35.962146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:33.190 [2024-12-08 14:11:35.962152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:35.987637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:35.987738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:33.190 [2024-12-08 14:11:35.987784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.459 ms 00:16:33.190 [2024-12-08 14:11:35.987807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:35.987877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:35.988166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:33.190 [2024-12-08 14:11:35.988207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:33.190 [2024-12-08 14:11:35.988234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:35.988594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:35.988674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:33.190 [2024-12-08 14:11:35.988716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:33.190 [2024-12-08 14:11:35.988736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:35.988838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:35.989004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:33.190 [2024-12-08 14:11:35.989027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:33.190 [2024-12-08 14:11:35.989043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:36.020917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:36.021018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:33.190 [2024-12-08 14:11:36.021062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.835 ms 00:16:33.190 [2024-12-08 14:11:36.021082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.190 [2024-12-08 14:11:36.030533] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:33.190 [2024-12-08 14:11:36.043223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.190 [2024-12-08 14:11:36.043248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:33.190 [2024-12-08 14:11:36.043261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.776 ms 00:16:33.190 
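Every management step logs its own duration, so bring-up hot spots can be ranked straight from the console. A rough sketch that pairs each step name with its duration and sorts, assuming a hypothetical build.log capture with one trace_step record per line:

    awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                   printf "%10.3f ms  %s\n", $0, name }' build.log \
        | sort -rn | head

On this run the Scrub NV cache step a little further down dwarfs everything else, 2486.278 ms of the 2834.241 ms total startup, as expected for a first start that must scrub the whole 4 GiB NV cache data region.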
[2024-12-08 14:11:36.043267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.447 [2024-12-08 14:11:36.114594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:33.447 [2024-12-08 14:11:36.114623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:33.447 [2024-12-08 14:11:36.114635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.272 ms 00:16:33.447 [2024-12-08 14:11:36.114643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:33.447 [2024-12-08 14:11:36.114710] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:33.447 [2024-12-08 14:11:36.114721] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:35.976 [2024-12-08 14:11:38.601000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.601047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:35.976 [2024-12-08 14:11:38.601064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2486.278 ms 00:16:35.976 [2024-12-08 14:11:38.601073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.601308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.601325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:35.976 [2024-12-08 14:11:38.601336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:16:35.976 [2024-12-08 14:11:38.601344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.624826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.624870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:35.976 [2024-12-08 14:11:38.624885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.445 ms 00:16:35.976 [2024-12-08 14:11:38.624893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.647786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.647813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:35.976 [2024-12-08 14:11:38.647828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.828 ms 00:16:35.976 [2024-12-08 14:11:38.647835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.648190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.648205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:35.976 [2024-12-08 14:11:38.648216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:35.976 [2024-12-08 14:11:38.648224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.709749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.709778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:35.976 [2024-12-08 14:11:38.709792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.493 ms 00:16:35.976 [2024-12-08 14:11:38.709800] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.734111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.734139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:35.976 [2024-12-08 14:11:38.734151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.236 ms 00:16:35.976 [2024-12-08 14:11:38.734159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.738084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.738114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:35.976 [2024-12-08 14:11:38.738127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.862 ms 00:16:35.976 [2024-12-08 14:11:38.738135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.761926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.762069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:35.976 [2024-12-08 14:11:38.762139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.739 ms 00:16:35.976 [2024-12-08 14:11:38.762166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.762247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.762276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:35.976 [2024-12-08 14:11:38.762302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:35.976 [2024-12-08 14:11:38.762374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.762482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.976 [2024-12-08 14:11:38.762521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:35.976 [2024-12-08 14:11:38.762547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:35.976 [2024-12-08 14:11:38.762682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.976 [2024-12-08 14:11:38.763805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:35.976 [2024-12-08 14:11:38.767098] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2834.241 ms, result 0 00:16:35.976 [2024-12-08 14:11:38.767994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.976 { 00:16:35.976 "name": "ftl0", 00:16:35.976 "uuid": "1c62f382-c384-45a2-bc0e-3a3545a6a62f" 00:16:35.976 } 00:16:35.976 14:11:38 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:35.976 14:11:38 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:35.976 14:11:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.976 14:11:38 -- common/autotest_common.sh@899 -- # local i 00:16:35.976 14:11:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.976 14:11:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.976 14:11:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:36.257 14:11:38 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:36.257 [ 00:16:36.257 { 00:16:36.257 "name": "ftl0", 00:16:36.257 "aliases": [ 00:16:36.257 "1c62f382-c384-45a2-bc0e-3a3545a6a62f" 00:16:36.257 ], 00:16:36.257 "product_name": "FTL disk", 00:16:36.257 "block_size": 4096, 00:16:36.257 "num_blocks": 23592960, 00:16:36.257 "uuid": "1c62f382-c384-45a2-bc0e-3a3545a6a62f", 00:16:36.257 "assigned_rate_limits": { 00:16:36.257 "rw_ios_per_sec": 0, 00:16:36.257 "rw_mbytes_per_sec": 0, 00:16:36.257 "r_mbytes_per_sec": 0, 00:16:36.257 "w_mbytes_per_sec": 0 00:16:36.257 }, 00:16:36.257 "claimed": false, 00:16:36.257 "zoned": false, 00:16:36.257 "supported_io_types": { 00:16:36.257 "read": true, 00:16:36.257 "write": true, 00:16:36.257 "unmap": true, 00:16:36.257 "write_zeroes": true, 00:16:36.257 "flush": true, 00:16:36.257 "reset": false, 00:16:36.257 "compare": false, 00:16:36.257 "compare_and_write": false, 00:16:36.257 "abort": false, 00:16:36.257 "nvme_admin": false, 00:16:36.257 "nvme_io": false 00:16:36.257 }, 00:16:36.257 "driver_specific": { 00:16:36.257 "ftl": { 00:16:36.257 "base_bdev": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:36.257 "cache": "nvc0n1p0" 00:16:36.257 } 00:16:36.257 } 00:16:36.257 } 00:16:36.257 ] 00:16:36.527 14:11:39 -- common/autotest_common.sh@905 -- # return 0 00:16:36.527 14:11:39 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:36.527 14:11:39 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:36.527 14:11:39 -- ftl/trim.sh@56 -- # echo ']}' 00:16:36.527 14:11:39 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:36.784 14:11:39 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:36.784 { 00:16:36.784 "name": "ftl0", 00:16:36.784 "aliases": [ 00:16:36.784 "1c62f382-c384-45a2-bc0e-3a3545a6a62f" 00:16:36.785 ], 00:16:36.785 "product_name": "FTL disk", 00:16:36.785 "block_size": 4096, 00:16:36.785 "num_blocks": 23592960, 00:16:36.785 "uuid": "1c62f382-c384-45a2-bc0e-3a3545a6a62f", 00:16:36.785 "assigned_rate_limits": { 00:16:36.785 "rw_ios_per_sec": 0, 00:16:36.785 "rw_mbytes_per_sec": 0, 00:16:36.785 "r_mbytes_per_sec": 0, 00:16:36.785 "w_mbytes_per_sec": 0 00:16:36.785 }, 00:16:36.785 "claimed": false, 00:16:36.785 "zoned": false, 00:16:36.785 "supported_io_types": { 00:16:36.785 "read": true, 00:16:36.785 "write": true, 00:16:36.785 "unmap": true, 00:16:36.785 "write_zeroes": true, 00:16:36.785 "flush": true, 00:16:36.785 "reset": false, 00:16:36.785 "compare": false, 00:16:36.785 "compare_and_write": false, 00:16:36.785 "abort": false, 00:16:36.785 "nvme_admin": false, 00:16:36.785 "nvme_io": false 00:16:36.785 }, 00:16:36.785 "driver_specific": { 00:16:36.785 "ftl": { 00:16:36.785 "base_bdev": "7b0f85ae-c949-4d37-a5d8-3473ee19959b", 00:16:36.785 "cache": "nvc0n1p0" 00:16:36.785 } 00:16:36.785 } 00:16:36.785 } 00:16:36.785 ]' 00:16:36.785 14:11:39 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:36.785 14:11:39 -- ftl/trim.sh@60 -- # nb=23592960 00:16:36.785 14:11:39 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:37.044 [2024-12-08 14:11:39.743661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.743696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:37.044 [2024-12-08 14:11:39.743706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:37.044 [2024-12-08 14:11:39.743714] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.743746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:37.044 [2024-12-08 14:11:39.745822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.745956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:37.044 [2024-12-08 14:11:39.745977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:16:37.044 [2024-12-08 14:11:39.745993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.746522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.746533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:37.044 [2024-12-08 14:11:39.746543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:16:37.044 [2024-12-08 14:11:39.746549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.749384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.749442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:37.044 [2024-12-08 14:11:39.749490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:16:37.044 [2024-12-08 14:11:39.749508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.754760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.754845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:37.044 [2024-12-08 14:11:39.754923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:16:37.044 [2024-12-08 14:11:39.754944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.773954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.774069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:37.044 [2024-12-08 14:11:39.774117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.913 ms 00:16:37.044 [2024-12-08 14:11:39.774136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.787091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.787186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:37.044 [2024-12-08 14:11:39.787246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.901 ms 00:16:37.044 [2024-12-08 14:11:39.787268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.787468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.787542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:37.044 [2024-12-08 14:11:39.787568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:37.044 [2024-12-08 14:11:39.787613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.806160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.806246] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:37.044 [2024-12-08 14:11:39.806291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:16:37.044 [2024-12-08 14:11:39.806308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.823951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.824050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:37.044 [2024-12-08 14:11:39.824095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.578 ms 00:16:37.044 [2024-12-08 14:11:39.824113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.841349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.841433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:37.044 [2024-12-08 14:11:39.841477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.181 ms 00:16:37.044 [2024-12-08 14:11:39.841494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.859007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.044 [2024-12-08 14:11:39.859094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:37.044 [2024-12-08 14:11:39.859140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.412 ms 00:16:37.044 [2024-12-08 14:11:39.859160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.044 [2024-12-08 14:11:39.859217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:16:37.044 [2024-12-08 14:11:39.859243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-100 elided: each of the remaining 99 bands reports the identical "0 / 261120 wr_cnt: 0 state: free", timestamps 14:11:39.859272 through 14:11:39.862288] 
00:16:37.045 [2024-12-08 14:11:39.862301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:37.045 [2024-12-08 14:11:39.862308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:16:37.045 [2024-12-08 14:11:39.862314] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:37.045 [2024-12-08 14:11:39.862321] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:37.045 [2024-12-08 14:11:39.862326] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:37.045 [2024-12-08 14:11:39.862333] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:37.045 [2024-12-08 14:11:39.862338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:37.045 [2024-12-08 14:11:39.862345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:37.045 [2024-12-08 14:11:39.862351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:37.045 [2024-12-08 14:11:39.862358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:37.045 [2024-12-08 14:11:39.862363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:37.045 [2024-12-08 14:11:39.862372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.045 [2024-12-08 14:11:39.862380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:37.045 [2024-12-08 14:11:39.862388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:16:37.045 [2024-12-08 14:11:39.862393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:37.045 [2024-12-08 14:11:39.872052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.045 [2024-12-08 14:11:39.872156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:37.045 [2024-12-08 14:11:39.872171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.604 ms 00:16:37.045 [2024-12-08 14:11:39.872177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.045 [2024-12-08 14:11:39.872355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.046 [2024-12-08 14:11:39.872366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:37.046 [2024-12-08 14:11:39.872374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:37.046 [2024-12-08 14:11:39.872380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.046 [2024-12-08 14:11:39.907392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.046 [2024-12-08 14:11:39.907486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.046 [2024-12-08 14:11:39.907502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.046 [2024-12-08 14:11:39.907508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.046 [2024-12-08 14:11:39.907590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.046 [2024-12-08 14:11:39.907601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.046 [2024-12-08 14:11:39.907609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.046 [2024-12-08 14:11:39.907614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.046 [2024-12-08 14:11:39.907660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.046 [2024-12-08 14:11:39.907667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.046 [2024-12-08 14:11:39.907674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.046 [2024-12-08 14:11:39.907680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.046 [2024-12-08 14:11:39.907705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.046 [2024-12-08 14:11:39.907712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.046 [2024-12-08 14:11:39.907719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.046 [2024-12-08 14:11:39.907725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.974485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.974622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.304 [2024-12-08 14:11:39.974640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.974648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.304 [2024-12-08 14:11:39.997351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 
[2024-12-08 14:11:39.997358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.304 [2024-12-08 14:11:39.997432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.997438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.304 [2024-12-08 14:11:39.997496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.997511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.304 [2024-12-08 14:11:39.997616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.997622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:37.304 [2024-12-08 14:11:39.997679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.997684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.304 [2024-12-08 14:11:39.997747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.997752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.997803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.304 [2024-12-08 14:11:39.997813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.304 [2024-12-08 14:11:39.997822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.304 [2024-12-08 14:11:39.997828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.304 [2024-12-08 14:11:39.998019] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 254.315 ms, result 0 00:16:37.304 true 00:16:37.304 14:11:40 -- ftl/trim.sh@63 -- # killprocess 71934 00:16:37.304 14:11:40 -- common/autotest_common.sh@936 -- # '[' -z 71934 ']' 00:16:37.304 14:11:40 -- common/autotest_common.sh@940 -- # kill -0 71934 00:16:37.304 14:11:40 -- common/autotest_common.sh@941 -- # uname 00:16:37.304 14:11:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.304 14:11:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71934 00:16:37.304 14:11:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.304 
14:11:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.304 14:11:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71934' 00:16:37.304 killing process with pid 71934 00:16:37.304 14:11:40 -- common/autotest_common.sh@955 -- # kill 71934 00:16:37.304 14:11:40 -- common/autotest_common.sh@960 -- # wait 71934 00:16:43.858 14:11:46 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:44.430 65536+0 records in 00:16:44.430 65536+0 records out 00:16:44.430 268435456 bytes (268 MB, 256 MiB) copied, 1.08433 s, 248 MB/s 00:16:44.430 14:11:47 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.430 [2024-12-08 14:11:47.277082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:44.430 [2024-12-08 14:11:47.277195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72120 ] 00:16:44.688 [2024-12-08 14:11:47.424299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.688 [2024-12-08 14:11:47.562885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.945 [2024-12-08 14:11:47.768817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.945 [2024-12-08 14:11:47.768867] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.205 [2024-12-08 14:11:47.912226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.912265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.205 [2024-12-08 14:11:47.912275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.205 [2024-12-08 14:11:47.912282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.914280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.914310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.205 [2024-12-08 14:11:47.914318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:16:45.205 [2024-12-08 14:11:47.914324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.914382] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.205 [2024-12-08 14:11:47.914941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.205 [2024-12-08 14:11:47.914962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.914968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.205 [2024-12-08 14:11:47.914975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:16:45.205 [2024-12-08 14:11:47.914994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.915958] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:45.205 [2024-12-08 14:11:47.925890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:45.205 [2024-12-08 14:11:47.926037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:45.205 [2024-12-08 14:11:47.926051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.933 ms 00:16:45.205 [2024-12-08 14:11:47.926057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.926123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.926131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:45.205 [2024-12-08 14:11:47.926138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:45.205 [2024-12-08 14:11:47.926143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.930569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.930594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.205 [2024-12-08 14:11:47.930601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.390 ms 00:16:45.205 [2024-12-08 14:11:47.930610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.930689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.930697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.205 [2024-12-08 14:11:47.930703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:45.205 [2024-12-08 14:11:47.930709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.930725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.930731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.205 [2024-12-08 14:11:47.930737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.205 [2024-12-08 14:11:47.930743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.930767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.205 [2024-12-08 14:11:47.933519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.933710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.205 [2024-12-08 14:11:47.933722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:16:45.205 [2024-12-08 14:11:47.933731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.933763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.933769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.205 [2024-12-08 14:11:47.933775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:45.205 [2024-12-08 14:11:47.933781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.933794] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:45.205 [2024-12-08 14:11:47.933808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:45.205 [2024-12-08 14:11:47.933833] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:45.205 [2024-12-08 14:11:47.933846] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:45.205 [2024-12-08 14:11:47.933904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:45.205 [2024-12-08 14:11:47.933912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.205 [2024-12-08 14:11:47.933920] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:45.205 [2024-12-08 14:11:47.933928] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.205 [2024-12-08 14:11:47.933934] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.205 [2024-12-08 14:11:47.933940] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.205 [2024-12-08 14:11:47.933946] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:45.205 [2024-12-08 14:11:47.933951] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:45.205 [2024-12-08 14:11:47.933958] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:45.205 [2024-12-08 14:11:47.933964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.933970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.205 [2024-12-08 14:11:47.933975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:16:45.205 [2024-12-08 14:11:47.933996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.934047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-12-08 14:11:47.934053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.205 [2024-12-08 14:11:47.934059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:45.205 [2024-12-08 14:11:47.934064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-12-08 14:11:47.934122] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.205 [2024-12-08 14:11:47.934129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.205 [2024-12-08 14:11:47.934135] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.205 [2024-12-08 14:11:47.934142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.205 [2024-12-08 14:11:47.934148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.205 [2024-12-08 14:11:47.934153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.205 [2024-12-08 14:11:47.934158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.205 [2024-12-08 14:11:47.934163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.205 [2024-12-08 14:11:47.934169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.205 [2024-12-08 14:11:47.934174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.205 [2024-12-08 14:11:47.934179] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.205 [2024-12-08 14:11:47.934184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.206 [2024-12-08 14:11:47.934189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.206 [2024-12-08 14:11:47.934196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.206 [2024-12-08 14:11:47.934205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:45.206 [2024-12-08 14:11:47.934210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934215] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:45.206 [2024-12-08 14:11:47.934220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:45.206 [2024-12-08 14:11:47.934225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:45.206 [2024-12-08 14:11:47.934235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:45.206 [2024-12-08 14:11:47.934240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.206 [2024-12-08 14:11:47.934250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.206 [2024-12-08 14:11:47.934265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.206 [2024-12-08 14:11:47.934279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.206 [2024-12-08 14:11:47.934293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.206 [2024-12-08 14:11:47.934307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.206 [2024-12-08 14:11:47.934317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.206 [2024-12-08 14:11:47.934321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:45.206 [2024-12-08 14:11:47.934326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.206 [2024-12-08 14:11:47.934331] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.206 [2024-12-08 14:11:47.934337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.206 
[2024-12-08 14:11:47.934342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.206 [2024-12-08 14:11:47.934356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.206 [2024-12-08 14:11:47.934362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.206 [2024-12-08 14:11:47.934367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.206 [2024-12-08 14:11:47.934373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.206 [2024-12-08 14:11:47.934377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.206 [2024-12-08 14:11:47.934383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.206 [2024-12-08 14:11:47.934388] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.206 [2024-12-08 14:11:47.934395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.206 [2024-12-08 14:11:47.934401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.206 [2024-12-08 14:11:47.934407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:45.206 [2024-12-08 14:11:47.934412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.206 [2024-12-08 14:11:47.934417] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:45.206 [2024-12-08 14:11:47.934423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:45.206 [2024-12-08 14:11:47.934428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:45.206 [2024-12-08 14:11:47.934434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:45.206 [2024-12-08 14:11:47.934439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:45.206 [2024-12-08 14:11:47.934445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:45.206 [2024-12-08 14:11:47.934450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:45.206 [2024-12-08 14:11:47.934455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:45.206 [2024-12-08 14:11:47.934461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:45.206 [2024-12-08 14:11:47.934466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:45.206 [2024-12-08 14:11:47.934471] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.206 [2024-12-08 14:11:47.934481] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.206 [2024-12-08 14:11:47.934487] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.206 [2024-12-08 14:11:47.934492] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.206 [2024-12-08 14:11:47.934497] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.206 [2024-12-08 14:11:47.934502] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.206 [2024-12-08 14:11:47.934508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.934514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.206 [2024-12-08 14:11:47.934520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:16:45.206 [2024-12-08 14:11:47.934525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.946416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.946445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.206 [2024-12-08 14:11:47.946454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.859 ms 00:16:45.206 [2024-12-08 14:11:47.946459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.946548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.946555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.206 [2024-12-08 14:11:47.946561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:45.206 [2024-12-08 14:11:47.946566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.982047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.982161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.206 [2024-12-08 14:11:47.982175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.464 ms 00:16:45.206 [2024-12-08 14:11:47.982183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.982241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.982250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.206 [2024-12-08 14:11:47.982260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:45.206 [2024-12-08 14:11:47.982266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.982544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.982564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.206 [2024-12-08 14:11:47.982571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.264 ms 00:16:45.206 [2024-12-08 14:11:47.982577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.982673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.982680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.206 [2024-12-08 14:11:47.982687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:45.206 [2024-12-08 14:11:47.982692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:47.994103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:47.994128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.206 [2024-12-08 14:11:47.994136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.393 ms 00:16:45.206 [2024-12-08 14:11:47.994143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:48.004000] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:45.206 [2024-12-08 14:11:48.004031] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:45.206 [2024-12-08 14:11:48.004039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:48.004045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:45.206 [2024-12-08 14:11:48.004052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.821 ms 00:16:45.206 [2024-12-08 14:11:48.004057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.206 [2024-12-08 14:11:48.022846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.206 [2024-12-08 14:11:48.022875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:45.206 [2024-12-08 14:11:48.022887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.742 ms 00:16:45.206 [2024-12-08 14:11:48.022894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.031929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.031957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:45.207 [2024-12-08 14:11:48.031965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.982 ms 00:16:45.207 [2024-12-08 14:11:48.031976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.040876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.040902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:45.207 [2024-12-08 14:11:48.040909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.845 ms 00:16:45.207 [2024-12-08 14:11:48.040914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.041196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.041219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.207 [2024-12-08 14:11:48.041226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:16:45.207 [2024-12-08 
14:11:48.041232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.086828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.086861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:45.207 [2024-12-08 14:11:48.086872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.577 ms 00:16:45.207 [2024-12-08 14:11:48.086879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.094711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.207 [2024-12-08 14:11:48.106134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.106163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.207 [2024-12-08 14:11:48.106173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.191 ms 00:16:45.207 [2024-12-08 14:11:48.106179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.106228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.106235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:45.207 [2024-12-08 14:11:48.106243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:45.207 [2024-12-08 14:11:48.106251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.106285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.106293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.207 [2024-12-08 14:11:48.106299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:45.207 [2024-12-08 14:11:48.106304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.107230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.107298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:45.207 [2024-12-08 14:11:48.107305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:16:45.207 [2024-12-08 14:11:48.107311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.107335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.107341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.207 [2024-12-08 14:11:48.107350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.207 [2024-12-08 14:11:48.107355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.207 [2024-12-08 14:11:48.107380] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:45.207 [2024-12-08 14:11:48.107387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.207 [2024-12-08 14:11:48.107393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:45.207 [2024-12-08 14:11:48.107398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:45.207 [2024-12-08 14:11:48.107404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.467 [2024-12-08 14:11:48.125816] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.467 [2024-12-08 14:11:48.125853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.467 [2024-12-08 14:11:48.125862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.395 ms 00:16:45.467 [2024-12-08 14:11:48.125868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.467 [2024-12-08 14:11:48.125945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.467 [2024-12-08 14:11:48.125953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.467 [2024-12-08 14:11:48.125960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:45.467 [2024-12-08 14:11:48.125965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.467 [2024-12-08 14:11:48.126598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.467 [2024-12-08 14:11:48.129093] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 214.154 ms, result 0 00:16:45.467 [2024-12-08 14:11:48.129897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.467 [2024-12-08 14:11:48.145007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:46.407  [2024-12-08T14:11:50.261Z] Copying: 24/256 [MB] (24 MBps) [2024-12-08T14:11:51.198Z] Copying: 67/256 [MB] (42 MBps) [2024-12-08T14:11:52.567Z] Copying: 93/256 [MB] (26 MBps) [2024-12-08T14:11:53.504Z] Copying: 132/256 [MB] (39 MBps) [2024-12-08T14:11:54.440Z] Copying: 157/256 [MB] (24 MBps) [2024-12-08T14:11:55.382Z] Copying: 175/256 [MB] (18 MBps) [2024-12-08T14:11:56.336Z] Copying: 194/256 [MB] (18 MBps) [2024-12-08T14:11:57.277Z] Copying: 212/256 [MB] (18 MBps) [2024-12-08T14:11:58.220Z] Copying: 230/256 [MB] (17 MBps) [2024-12-08T14:11:58.790Z] Copying: 247/256 [MB] (16 MBps) [2024-12-08T14:11:58.790Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-08 14:11:58.675244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:55.870 [2024-12-08 14:11:58.685339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.685388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:55.870 [2024-12-08 14:11:58.685413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:55.870 [2024-12-08 14:11:58.685421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.870 [2024-12-08 14:11:58.685445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:55.870 [2024-12-08 14:11:58.688478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.688527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:55.870 [2024-12-08 14:11:58.688538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:16:55.870 [2024-12-08 14:11:58.688546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.870 [2024-12-08 14:11:58.691700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.691746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:16:55.870 [2024-12-08 14:11:58.691757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:16:55.870 [2024-12-08 14:11:58.691765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.870 [2024-12-08 14:11:58.700936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.700998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:55.870 [2024-12-08 14:11:58.701009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.144 ms 00:16:55.870 [2024-12-08 14:11:58.701017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.870 [2024-12-08 14:11:58.707915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.707958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:55.870 [2024-12-08 14:11:58.707970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.837 ms 00:16:55.870 [2024-12-08 14:11:58.707978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.870 [2024-12-08 14:11:58.734306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.734354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.870 [2024-12-08 14:11:58.734366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.240 ms 00:16:55.870 [2024-12-08 14:11:58.734374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.870 [2024-12-08 14:11:58.751283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.870 [2024-12-08 14:11:58.751478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:55.870 [2024-12-08 14:11:58.751502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.847 ms 00:16:55.870 [2024-12-08 14:11:58.751510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.871 [2024-12-08 14:11:58.751709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.871 [2024-12-08 14:11:58.751722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:55.871 [2024-12-08 14:11:58.751731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:55.871 [2024-12-08 14:11:58.751739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.871 [2024-12-08 14:11:58.772130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.871 [2024-12-08 14:11:58.772170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:55.871 [2024-12-08 14:11:58.772179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.374 ms 00:16:55.871 [2024-12-08 14:11:58.772185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.132 [2024-12-08 14:11:58.791542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.132 [2024-12-08 14:11:58.791577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:56.132 [2024-12-08 14:11:58.791585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.306 ms 00:16:56.132 [2024-12-08 14:11:58.791591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.132 [2024-12-08 14:11:58.810510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.132 [2024-12-08 
14:11:58.810644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.132 [2024-12-08 14:11:58.810659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms 00:16:56.132 [2024-12-08 14:11:58.810664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.132 [2024-12-08 14:11:58.828596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.132 [2024-12-08 14:11:58.828624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.132 [2024-12-08 14:11:58.828632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.872 ms 00:16:56.132 [2024-12-08 14:11:58.828637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.132 [2024-12-08 14:11:58.828675] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:56.132 [2024-12-08 14:11:58.828686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.132 (bands 2 through 100 are reported identically: 0 / 261120 wr_cnt: 0 state: free) 00:16:56.133
[2024-12-08 14:11:58.829321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.133 [2024-12-08 14:11:58.829327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:16:56.133 [2024-12-08 14:11:58.829334] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.133 [2024-12-08 14:11:58.829339] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.133 [2024-12-08 14:11:58.829345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.133 [2024-12-08 14:11:58.829351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.133 [2024-12-08 14:11:58.829356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.133 [2024-12-08 14:11:58.829362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.133 [2024-12-08 14:11:58.829369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.133 [2024-12-08 14:11:58.829374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.133 [2024-12-08 14:11:58.829379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.133
[2024-12-08 14:11:58.829384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.133 [2024-12-08 14:11:58.829390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.133 [2024-12-08 14:11:58.829397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:16:56.133 [2024-12-08 14:11:58.829402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.838893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.133 [2024-12-08 14:11:58.838919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.133 [2024-12-08 14:11:58.838926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.476 ms 00:16:56.133 [2024-12-08 14:11:58.838936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.839116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.133 [2024-12-08 14:11:58.839123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.133 [2024-12-08 14:11:58.839129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:56.133 [2024-12-08 14:11:58.839135] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.868761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.868871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.133 [2024-12-08 14:11:58.868883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.868893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.868957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.868964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.133 [2024-12-08 14:11:58.868970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.868975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.869017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.869025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.133 [2024-12-08 14:11:58.869030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.869036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.869051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.869057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.133 [2024-12-08 14:11:58.869063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.869068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.926509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.926537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.133 [2024-12-08 14:11:58.926544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.926553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.133 [2024-12-08 14:11:58.949226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.133 [2024-12-08 14:11:58.949285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.133 [2024-12-08 14:11:58.949329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.133 [2024-12-08 14:11:58.949419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:56.133 [2024-12-08 14:11:58.949462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.133 [2024-12-08 14:11:58.949508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.133 [2024-12-08 14:11:58.949555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.133 [2024-12-08 14:11:58.949563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.133 [2024-12-08 14:11:58.949568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.133 [2024-12-08 14:11:58.949672] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.355 ms, result 0 00:16:57.069 00:16:57.069 00:16:57.069 14:11:59 -- ftl/trim.sh@72 -- # svcpid=72259 00:16:57.069 14:11:59 -- ftl/trim.sh@73 -- # waitforlisten 72259 00:16:57.069 14:11:59 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:57.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.069 14:11:59 -- common/autotest_common.sh@829 -- # '[' -z 72259 ']' 00:16:57.069 14:11:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.069 14:11:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.069 14:11:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.069 14:11:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.069 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:16:57.069 [2024-12-08 14:11:59.943204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
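
The trace above marks the hand-off to ftl/trim.sh: the 'FTL shutdown' management process finishes with result 0, and the script relaunches spdk_tgt (line @71), records its PID in svcpid (@72), and calls waitforlisten (@73), which blocks until the RPC socket at /var/tmp/spdk.sock accepts requests. A minimal sketch of that launch-and-wait pattern, assuming the paths from this run (the polling loop stands in for the real autotest_common.sh helper, not its exact contents):

  # Start the SPDK target in the background and remember its PID.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!

  # Block until the target answers on the default UNIX-domain RPC socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Once the socket is live, the script drives the target over JSON-RPC, starting with the load_config call traced below.
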
00:16:57.069 [2024-12-08 14:11:59.943314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72259 ] 00:16:57.328 [2024-12-08 14:12:00.088920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.328 [2024-12-08 14:12:00.234929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.328 [2024-12-08 14:12:00.235097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.896 14:12:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.896 14:12:00 -- common/autotest_common.sh@862 -- # return 0 00:16:57.896 14:12:00 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:58.156 [2024-12-08 14:12:00.942714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.156 [2024-12-08 14:12:00.942760] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.418 [2024-12-08 14:12:01.099879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.418 [2024-12-08 14:12:01.100059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.418 [2024-12-08 14:12:01.100084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.418 [2024-12-08 14:12:01.100093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.418 [2024-12-08 14:12:01.102736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.418 [2024-12-08 14:12:01.102772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.418 [2024-12-08 14:12:01.102784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:16:58.418 [2024-12-08 14:12:01.102792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.418 [2024-12-08 14:12:01.102867] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.419 [2024-12-08 14:12:01.103584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.419 [2024-12-08 14:12:01.103693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.103703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.419 [2024-12-08 14:12:01.103713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:16:58.419 [2024-12-08 14:12:01.103721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.104848] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.419 [2024-12-08 14:12:01.117854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.117893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.419 [2024-12-08 14:12:01.117905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.011 ms 00:16:58.419 [2024-12-08 14:12:01.117914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.118004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.118016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:58.419 [2024-12-08 14:12:01.118025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:58.419 [2024-12-08 14:12:01.118034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.123290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.123431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.419 [2024-12-08 14:12:01.123493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.207 ms 00:16:58.419 [2024-12-08 14:12:01.123519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.123673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.123714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.419 [2024-12-08 14:12:01.123735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:58.419 [2024-12-08 14:12:01.123756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.123795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.123937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.419 [2024-12-08 14:12:01.123962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.419 [2024-12-08 14:12:01.123995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.124037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.419 [2024-12-08 14:12:01.127531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.127647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.419 [2024-12-08 14:12:01.127703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:16:58.419 [2024-12-08 14:12:01.128107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.128593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.128685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.419 [2024-12-08 14:12:01.128742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:58.419 [2024-12-08 14:12:01.128768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.128827] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.419 [2024-12-08 14:12:01.128886] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:58.419 [2024-12-08 14:12:01.128972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.419 [2024-12-08 14:12:01.129032] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:58.419 [2024-12-08 14:12:01.129130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:58.419 [2024-12-08 14:12:01.129258] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:58.419 [2024-12-08 14:12:01.129299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:58.419 [2024-12-08 14:12:01.129355] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.419 [2024-12-08 14:12:01.129389] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.419 [2024-12-08 14:12:01.129437] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.419 [2024-12-08 14:12:01.129460] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.419 [2024-12-08 14:12:01.129479] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:58.419 [2024-12-08 14:12:01.129501] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:58.419 [2024-12-08 14:12:01.129521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.129541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.419 [2024-12-08 14:12:01.129561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:16:58.419 [2024-12-08 14:12:01.129580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.129662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.419 [2024-12-08 14:12:01.129685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.419 [2024-12-08 14:12:01.129704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:58.419 [2024-12-08 14:12:01.129723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.419 [2024-12-08 14:12:01.129812] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.419 [2024-12-08 14:12:01.129883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.419 [2024-12-08 14:12:01.129906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.419 [2024-12-08 14:12:01.129928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.419 [2024-12-08 14:12:01.129947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.419 [2024-12-08 14:12:01.129967] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.419 [2024-12-08 14:12:01.130046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.419 [2024-12-08 14:12:01.130064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.419 [2024-12-08 14:12:01.130125] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.419 [2024-12-08 14:12:01.130144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.419 [2024-12-08 14:12:01.130162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.419 [2024-12-08 14:12:01.130181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.419 [2024-12-08 14:12:01.130199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:58.419 [2024-12-08 14:12:01.130266] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.419 [2024-12-08 14:12:01.130306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:58.419 [2024-12-08 14:12:01.130324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:58.419 [2024-12-08 14:12:01.130383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:58.419 [2024-12-08 14:12:01.130404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:58.419 [2024-12-08 14:12:01.130422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.419 [2024-12-08 14:12:01.130443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.419 [2024-12-08 14:12:01.130490] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.419 [2024-12-08 14:12:01.130508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.419 [2024-12-08 14:12:01.130545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.419 [2024-12-08 14:12:01.130564] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.419 [2024-12-08 14:12:01.130632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.419 [2024-12-08 14:12:01.130693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:58.419 [2024-12-08 14:12:01.130792] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.419 [2024-12-08 14:12:01.130816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.419 [2024-12-08 14:12:01.130834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.419 [2024-12-08 14:12:01.130877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.419 [2024-12-08 14:12:01.130902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:58.419 [2024-12-08 14:12:01.130928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.419 [2024-12-08 14:12:01.130951] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.419 [2024-12-08 14:12:01.130992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.419 [2024-12-08 14:12:01.131020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.419 [2024-12-08 14:12:01.131048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.419 [2024-12-08 14:12:01.131072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.419 [2024-12-08 14:12:01.131098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.419 [2024-12-08 14:12:01.131124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:58.419 [2024-12-08 14:12:01.131144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.420 [2024-12-08 14:12:01.131162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.420 [2024-12-08 14:12:01.131222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.420 [2024-12-08 14:12:01.131249] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.420 [2024-12-08 14:12:01.131285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.420 [2024-12-08 14:12:01.131345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.420 [2024-12-08 14:12:01.131376] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:58.420 [2024-12-08 14:12:01.131427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.420 [2024-12-08 14:12:01.131462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:58.420 [2024-12-08 14:12:01.131512] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:58.420 [2024-12-08 14:12:01.131544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:58.420 [2024-12-08 14:12:01.131637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:58.420 [2024-12-08 14:12:01.131649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:58.420 [2024-12-08 14:12:01.131656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:58.420 [2024-12-08 14:12:01.131664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:58.420 [2024-12-08 14:12:01.131671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:58.420 [2024-12-08 14:12:01.131680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:58.420 [2024-12-08 14:12:01.131688] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:58.420 [2024-12-08 14:12:01.131696] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.420 [2024-12-08 14:12:01.131704] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.420 [2024-12-08 14:12:01.131713] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.420 [2024-12-08 14:12:01.131720] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.420 [2024-12-08 14:12:01.131729] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.420 [2024-12-08 14:12:01.131735] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.420 [2024-12-08 14:12:01.131747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.131754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.420 [2024-12-08 14:12:01.131763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:16:58.420 [2024-12-08 14:12:01.131771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.147163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.147201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.420 [2024-12-08 14:12:01.147215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.336 ms 00:16:58.420 [2024-12-08 14:12:01.147225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.147346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.147356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.420 [2024-12-08 14:12:01.147365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:58.420 [2024-12-08 14:12:01.147372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.179137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.179175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.420 [2024-12-08 14:12:01.179188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.742 ms 00:16:58.420 [2024-12-08 14:12:01.179195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.179257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.179269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.420 [2024-12-08 14:12:01.179279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:58.420 [2024-12-08 14:12:01.179286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.179674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.179706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.420 [2024-12-08 14:12:01.179720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:16:58.420 [2024-12-08 14:12:01.179727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.179851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.179860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.420 [2024-12-08 14:12:01.179871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:58.420 [2024-12-08 14:12:01.179878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.196295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.196333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.420 [2024-12-08 14:12:01.196349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.393 ms 00:16:58.420 [2024-12-08 14:12:01.196357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.210103] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:58.420 [2024-12-08 14:12:01.210265] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.420 [2024-12-08 14:12:01.210285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.210294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.420 [2024-12-08 14:12:01.210306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.813 ms 00:16:58.420 [2024-12-08 14:12:01.210313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.236280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.236330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.420 [2024-12-08 14:12:01.236345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.881 ms 00:16:58.420 [2024-12-08 14:12:01.236353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.250137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.250194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.420 [2024-12-08 14:12:01.250209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.686 ms 00:16:58.420 [2024-12-08 14:12:01.250217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.263348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.263395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.420 [2024-12-08 14:12:01.263413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.033 ms 00:16:58.420 [2024-12-08 14:12:01.263420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.420 [2024-12-08 14:12:01.263829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.420 [2024-12-08 14:12:01.263842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.420 [2024-12-08 14:12:01.263857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:16:58.421 [2024-12-08 14:12:01.263864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.421 [2024-12-08 14:12:01.332755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.421 [2024-12-08 14:12:01.332821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.421 [2024-12-08 14:12:01.332844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.863 ms 00:16:58.421 [2024-12-08 14:12:01.332854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 
14:12:01.344622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:58.681 [2024-12-08 14:12:01.364208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.364268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.681 [2024-12-08 14:12:01.364281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.242 ms 00:16:58.681 [2024-12-08 14:12:01.364292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 14:12:01.364375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.364390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.681 [2024-12-08 14:12:01.364401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.681 [2024-12-08 14:12:01.364414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 14:12:01.364470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.364482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.681 [2024-12-08 14:12:01.364490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:58.681 [2024-12-08 14:12:01.364500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 14:12:01.365939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.366011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:58.681 [2024-12-08 14:12:01.366023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:16:58.681 [2024-12-08 14:12:01.366033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 14:12:01.366076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.366090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.681 [2024-12-08 14:12:01.366098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.681 [2024-12-08 14:12:01.366108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 14:12:01.366149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.681 [2024-12-08 14:12:01.366164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.366172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.681 [2024-12-08 14:12:01.366182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:58.681 [2024-12-08 14:12:01.366190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.681 [2024-12-08 14:12:01.393210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.681 [2024-12-08 14:12:01.393277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.681 [2024-12-08 14:12:01.393294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.978 ms 00:16:58.681 [2024-12-08 14:12:01.393302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.682 [2024-12-08 14:12:01.393432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.682 [2024-12-08 14:12:01.393443] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.682 [2024-12-08 14:12:01.393455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:58.682 [2024-12-08 14:12:01.393466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.682 [2024-12-08 14:12:01.394585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.682 [2024-12-08 14:12:01.398354] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 294.369 ms, result 0 00:16:58.682 [2024-12-08 14:12:01.400102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:58.682 Some configs were skipped because the RPC state that can call them passed over. 00:16:58.682 14:12:01 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:58.942 [2024-12-08 14:12:01.654155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.942 [2024-12-08 14:12:01.654377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:58.942 [2024-12-08 14:12:01.654450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.115 ms 00:16:58.942 [2024-12-08 14:12:01.654479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.942 [2024-12-08 14:12:01.654542] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 28.503 ms, result 0 00:16:58.942 true 00:16:58.942 14:12:01 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:59.202 [2024-12-08 14:12:01.886317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.202 [2024-12-08 14:12:01.886551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:59.202 [2024-12-08 14:12:01.886629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.728 ms 00:16:59.202 [2024-12-08 14:12:01.886642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.202 [2024-12-08 14:12:01.886695] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 28.105 ms, result 0 00:16:59.202 true 00:16:59.202 14:12:01 -- ftl/trim.sh@81 -- # killprocess 72259 00:16:59.202 14:12:01 -- common/autotest_common.sh@936 -- # '[' -z 72259 ']' 00:16:59.202 14:12:01 -- common/autotest_common.sh@940 -- # kill -0 72259 00:16:59.202 14:12:01 -- common/autotest_common.sh@941 -- # uname 00:16:59.202 14:12:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.202 14:12:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72259 00:16:59.202 killing process with pid 72259 00:16:59.202 14:12:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:59.202 14:12:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:59.202 14:12:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72259' 00:16:59.202 14:12:01 -- common/autotest_common.sh@955 -- # kill 72259 00:16:59.202 14:12:01 -- common/autotest_common.sh@960 -- # wait 72259 00:16:59.771 [2024-12-08 14:12:02.522265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.522312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:16:59.771 [2024-12-08 14:12:02.522322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:59.771 [2024-12-08 14:12:02.522329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.522349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:59.771 [2024-12-08 14:12:02.524383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.524408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:59.771 [2024-12-08 14:12:02.524419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:16:59.771 [2024-12-08 14:12:02.524426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.524673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.524681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:59.771 [2024-12-08 14:12:02.524689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:16:59.771 [2024-12-08 14:12:02.524694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.527863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.527886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:59.771 [2024-12-08 14:12:02.527896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms 00:16:59.771 [2024-12-08 14:12:02.527902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.533208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.533348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:59.771 [2024-12-08 14:12:02.533365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms 00:16:59.771 [2024-12-08 14:12:02.533371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.540854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.540953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:59.771 [2024-12-08 14:12:02.540970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.421 ms 00:16:59.771 [2024-12-08 14:12:02.540975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.547850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.547951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:59.771 [2024-12-08 14:12:02.547964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.835 ms 00:16:59.771 [2024-12-08 14:12:02.547970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.548090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.548099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:59.771 [2024-12-08 14:12:02.548107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:59.771 [2024-12-08 14:12:02.548112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
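Each management step in these traces is logged as a fixed trio from mngt/ftl_mngt.c: 407:trace_step prints "name: <step>", 409:trace_step prints "duration: <n> ms", and 410:trace_step prints "status: <n>", and every sequence ends with a 434:finish_msg summary such as "Management process finished, name 'FTL startup', duration = 294.369 ms, result 0". Below is a minimal sketch for tallying the per-step durations against that summary; it assumes one log entry per line (as on a live console), the script name is hypothetical, and it is not part of the SPDK tree:

#!/usr/bin/env python3
# summarize_ftl_steps.py -- hypothetical helper, not part of the SPDK tree.
# Pairs every "407:trace_step ... name: <step>" entry with the
# "409:trace_step ... duration: <n> ms" entry that follows it, prints a
# per-step table, and sums the durations so the total can be compared with
# the "Management process finished ... duration = <n> ms" summary line.
import re
import sys

NAME_RE = re.compile(r"407:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"409:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def main(path: str) -> None:
    steps, pending = [], None
    with open(path) as log:
        for line in log:  # assumes one log entry per line
            if (m := NAME_RE.search(line)):
                pending = m.group(1).strip()
            elif (m := DUR_RE.search(line)) and pending:
                steps.append((pending, float(m.group(1))))
                pending = None
    for name, ms in steps:
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  total over {len(steps)} steps")

if __name__ == "__main__":
    main(sys.argv[1])

Run as "python3 summarize_ftl_steps.py console.log". For the startup sequence earlier in this log, the tally is dominated by "Restore P2L checkpoints" (68.863 ms) and "Initialize NV cache" (31.742 ms), which together account for roughly a third of the reported 294.369 ms total.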
00:16:59.771 [2024-12-08 14:12:02.555950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.555976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:59.771 [2024-12-08 14:12:02.555991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.820 ms 00:16:59.771 [2024-12-08 14:12:02.555997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.563438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.563463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:59.771 [2024-12-08 14:12:02.563475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.410 ms 00:16:59.771 [2024-12-08 14:12:02.563480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.570708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.570803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:59.771 [2024-12-08 14:12:02.570817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.197 ms 00:16:59.771 [2024-12-08 14:12:02.570822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.578191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.771 [2024-12-08 14:12:02.578277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:59.771 [2024-12-08 14:12:02.578324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.319 ms 00:16:59.771 [2024-12-08 14:12:02.578342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.771 [2024-12-08 14:12:02.578383] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:59.771 [2024-12-08 14:12:02.578405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578940] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.578996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.579280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 
14:12:02.580477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:59.771 [2024-12-08 14:12:02.580529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.580991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:16:59.772 [2024-12-08 14:12:02.581270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.581964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:59.772 [2024-12-08 14:12:02.582471] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:59.772 [2024-12-08 14:12:02.582480] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:16:59.772 [2024-12-08 14:12:02.582487] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:59.772 [2024-12-08 14:12:02.582493] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:59.772 [2024-12-08 14:12:02.582499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:59.772 [2024-12-08 14:12:02.582506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:59.772 [2024-12-08 14:12:02.582512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:59.772 [2024-12-08 14:12:02.582519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:59.772 [2024-12-08 14:12:02.582525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:59.772 [2024-12-08 14:12:02.582531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:59.772 [2024-12-08 14:12:02.582535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:59.772 [2024-12-08 14:12:02.582543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.772 [2024-12-08 14:12:02.582550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:59.772 [2024-12-08 14:12:02.582558] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:16:59.772 [2024-12-08 14:12:02.582566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.772 [2024-12-08 14:12:02.592666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.772 [2024-12-08 14:12:02.592749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:59.772 [2024-12-08 14:12:02.592791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.060 ms 00:16:59.772 [2024-12-08 14:12:02.592809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.772 [2024-12-08 14:12:02.593000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.772 [2024-12-08 14:12:02.593087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:59.772 [2024-12-08 14:12:02.593109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:16:59.772 [2024-12-08 14:12:02.593123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.772 [2024-12-08 14:12:02.628204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.772 [2024-12-08 14:12:02.628298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.772 [2024-12-08 14:12:02.628338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.772 [2024-12-08 14:12:02.628356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.772 [2024-12-08 14:12:02.628427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.772 [2024-12-08 14:12:02.628445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.772 [2024-12-08 14:12:02.628464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.772 [2024-12-08 14:12:02.628478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.772 [2024-12-08 14:12:02.628523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.772 [2024-12-08 14:12:02.628540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.773 [2024-12-08 14:12:02.628558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.773 [2024-12-08 14:12:02.628600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.773 [2024-12-08 14:12:02.628637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:59.773 [2024-12-08 14:12:02.628653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.773 [2024-12-08 14:12:02.628669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:59.773 [2024-12-08 14:12:02.628685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.689530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.689642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.032 [2024-12-08 14:12:02.689683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.689702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.712643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.712747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:17:00.032 [2024-12-08 14:12:02.712790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.712807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.712859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.712877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.032 [2024-12-08 14:12:02.712896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.712910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.712942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.712957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.032 [2024-12-08 14:12:02.712974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.713033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.713125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.713144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.032 [2024-12-08 14:12:02.713165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.713476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.713566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.713635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.032 [2024-12-08 14:12:02.713657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.713694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.713741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.713759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.032 [2024-12-08 14:12:02.713777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.713791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.713870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.032 [2024-12-08 14:12:02.713890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.032 [2024-12-08 14:12:02.713906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.032 [2024-12-08 14:12:02.713920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.032 [2024-12-08 14:12:02.714048] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 191.765 ms, result 0 00:17:00.601 14:12:03 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:00.601 14:12:03 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:00.601 [2024-12-08 14:12:03.413008] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:17:00.601 [2024-12-08 14:12:03.413322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72315 ] 00:17:00.861 [2024-12-08 14:12:03.563357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.861 [2024-12-08 14:12:03.708044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.119 [2024-12-08 14:12:03.912663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.119 [2024-12-08 14:12:03.912868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.380 [2024-12-08 14:12:04.053734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.053869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.380 [2024-12-08 14:12:04.053885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.380 [2024-12-08 14:12:04.053892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.055912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.056040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.380 [2024-12-08 14:12:04.056053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:17:01.380 [2024-12-08 14:12:04.056059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.056115] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.380 [2024-12-08 14:12:04.056660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.380 [2024-12-08 14:12:04.056676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.056683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.380 [2024-12-08 14:12:04.056689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:17:01.380 [2024-12-08 14:12:04.056695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.057677] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.380 [2024-12-08 14:12:04.067336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.067448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.380 [2024-12-08 14:12:04.067461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.661 ms 00:17:01.380 [2024-12-08 14:12:04.067468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.067531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.067540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.380 [2024-12-08 14:12:04.067546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:01.380 [2024-12-08 14:12:04.067551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.071937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.071962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.380 [2024-12-08 14:12:04.071969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.355 ms 00:17:01.380 [2024-12-08 14:12:04.071978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.072078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.072086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.380 [2024-12-08 14:12:04.072092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:01.380 [2024-12-08 14:12:04.072097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.072114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.072120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.380 [2024-12-08 14:12:04.072126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:01.380 [2024-12-08 14:12:04.072131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.072154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.380 [2024-12-08 14:12:04.074881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.074975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.380 [2024-12-08 14:12:04.074995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:17:01.380 [2024-12-08 14:12:04.075005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.075036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.075042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.380 [2024-12-08 14:12:04.075048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:01.380 [2024-12-08 14:12:04.075054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.075067] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:01.380 [2024-12-08 14:12:04.075081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:01.380 [2024-12-08 14:12:04.075106] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:01.380 [2024-12-08 14:12:04.075119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:01.380 [2024-12-08 14:12:04.075175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:01.380 [2024-12-08 14:12:04.075182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.380 [2024-12-08 14:12:04.075190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:01.380 [2024-12-08 14:12:04.075198] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.380 [2024-12-08 14:12:04.075204] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.380 [2024-12-08 14:12:04.075210] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.380 [2024-12-08 14:12:04.075215] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.380 [2024-12-08 14:12:04.075221] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:01.380 [2024-12-08 14:12:04.075228] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:01.380 [2024-12-08 14:12:04.075234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.075239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.380 [2024-12-08 14:12:04.075245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:17:01.380 [2024-12-08 14:12:04.075250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.075299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.380 [2024-12-08 14:12:04.075305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.380 [2024-12-08 14:12:04.075311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:01.380 [2024-12-08 14:12:04.075316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.380 [2024-12-08 14:12:04.075371] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.380 [2024-12-08 14:12:04.075378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.380 [2024-12-08 14:12:04.075384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.380 [2024-12-08 14:12:04.075390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.380 [2024-12-08 14:12:04.075395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.380 [2024-12-08 14:12:04.075401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.380 [2024-12-08 14:12:04.075406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.380 [2024-12-08 14:12:04.075411] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.381 [2024-12-08 14:12:04.075417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.381 [2024-12-08 14:12:04.075426] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.381 [2024-12-08 14:12:04.075431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.381 [2024-12-08 14:12:04.075437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.381 [2024-12-08 14:12:04.075442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.381 [2024-12-08 14:12:04.075452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:01.381 [2024-12-08 14:12:04.075457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.381 [2024-12-08 14:12:04.075467] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:01.381 [2024-12-08 14:12:04.075472] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:01.381 [2024-12-08 14:12:04.075481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:01.381 [2024-12-08 14:12:04.075486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:01.381 [2024-12-08 14:12:04.075491] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.381 [2024-12-08 14:12:04.075496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.381 [2024-12-08 14:12:04.075506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.381 [2024-12-08 14:12:04.075511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.381 [2024-12-08 14:12:04.075521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.381 [2024-12-08 14:12:04.075526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.381 [2024-12-08 14:12:04.075535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.381 [2024-12-08 14:12:04.075540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:01.381 [2024-12-08 14:12:04.075549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.381 [2024-12-08 14:12:04.075554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.381 [2024-12-08 14:12:04.075564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.381 [2024-12-08 14:12:04.075569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:01.381 [2024-12-08 14:12:04.075573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.381 [2024-12-08 14:12:04.075578] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.381 [2024-12-08 14:12:04.075583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.381 [2024-12-08 14:12:04.075588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.381 [2024-12-08 14:12:04.075596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.381 [2024-12-08 14:12:04.075603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.381 [2024-12-08 14:12:04.075608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.381 [2024-12-08 14:12:04.075613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.381 [2024-12-08 14:12:04.075618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.381 [2024-12-08 14:12:04.075623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.381 [2024-12-08 14:12:04.075628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.381 
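The layout dump above and the superblock dump that follows describe the same regions in different units: ftl_layout.c prints offsets and sizes in MiB, while the ftl_sb_v5 lines give blk_offs/blk_sz in hex, counted in FTL blocks. The figures are consistent with a 4 KiB block, an inference from the dump rather than something the log states: the l2p region's blk_sz:0x5a00 is 23040 blocks * 4 KiB = 90.00 MiB, matching both the MiB dump and the 23592960 L2P entries at the 4-byte address size reported above, and blk_sz:0x1900000 is 26214400 blocks * 4 KiB = 102400.00 MiB, the data_btm region. The same entry count also explains the second bdev_ftl_unmap earlier in the log: --lba 23591936 is 23592960 - 1024, i.e. the last 1024 blocks of the logical space. A small sketch of the conversion (hypothetical script, region values copied from the dumps):

#!/usr/bin/env python3
# region_to_mib.py -- hypothetical helper, not an SPDK utility.
# Converts blk_offs/blk_sz pairs from the superblock layout dump (hex FTL
# blocks) into the MiB figures printed by ftl_layout.c, assuming the 4 KiB
# FTL block size implied by the dumps in this log.
FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumption, see above)
MIB = 1024 * 1024

def blocks_to_mib(hex_blocks: str) -> float:
    """Hex block count -> size in MiB at 4 KiB per block."""
    return int(hex_blocks, 16) * FTL_BLOCK_SIZE / MIB

# Rows copied from the dumps, with the MiB values they should reproduce:
for name, blk_offs, blk_sz in [
    ("l2p      (type 0x2)", "0x20", "0x5a00"),      # 0.12 MiB,   90.00 MiB
    ("data_nvc (type 0x8)", "0x6be0", "0x100000"),  # 107.88 MiB, 4096.00 MiB
    ("data_btm (type 0x9)", "0x40", "0x1900000"),   # 0.25 MiB,   102400.00 MiB
]:
    print(f"{name}: offset {blocks_to_mib(blk_offs):7.2f} MiB, "
          f"size {blocks_to_mib(blk_sz):9.2f} MiB")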
[2024-12-08 14:12:04.075634] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.381 [2024-12-08 14:12:04.075641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.381 [2024-12-08 14:12:04.075647] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.381 [2024-12-08 14:12:04.075653] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:01.381 [2024-12-08 14:12:04.075658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.381 [2024-12-08 14:12:04.075663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:01.381 [2024-12-08 14:12:04.075669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:01.381 [2024-12-08 14:12:04.075674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:01.381 [2024-12-08 14:12:04.075679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:01.381 [2024-12-08 14:12:04.075685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:01.381 [2024-12-08 14:12:04.075690] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:01.381 [2024-12-08 14:12:04.075695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:01.381 [2024-12-08 14:12:04.075701] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:01.381 [2024-12-08 14:12:04.075706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:01.381 [2024-12-08 14:12:04.075712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:01.381 [2024-12-08 14:12:04.075717] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.381 [2024-12-08 14:12:04.075725] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.381 [2024-12-08 14:12:04.075731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.381 [2024-12-08 14:12:04.075736] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.381 [2024-12-08 14:12:04.075742] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.381 [2024-12-08 14:12:04.075747] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.381 [2024-12-08 14:12:04.075753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.075758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.381 [2024-12-08 14:12:04.075764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:17:01.381 [2024-12-08 14:12:04.075769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.087714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.087812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.381 [2024-12-08 14:12:04.087853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.914 ms 00:17:01.381 [2024-12-08 14:12:04.087870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.087972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.088027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:01.381 [2024-12-08 14:12:04.088045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:01.381 [2024-12-08 14:12:04.088059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.124822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.124933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.381 [2024-12-08 14:12:04.124979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.701 ms 00:17:01.381 [2024-12-08 14:12:04.125011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.125076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.125097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.381 [2024-12-08 14:12:04.125118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.381 [2024-12-08 14:12:04.125132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.125446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.125543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.381 [2024-12-08 14:12:04.125585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:01.381 [2024-12-08 14:12:04.125602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.125706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.125730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.381 [2024-12-08 14:12:04.125802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:01.381 [2024-12-08 14:12:04.125820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.137138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.137233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.381 [2024-12-08 14:12:04.137274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.290 ms 00:17:01.381 [2024-12-08 14:12:04.137294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.381 [2024-12-08 14:12:04.146862] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:01.381 [2024-12-08 14:12:04.146962] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:01.381 [2024-12-08 14:12:04.147028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.381 [2024-12-08 14:12:04.147045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:01.381 [2024-12-08 14:12:04.147060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.648 ms 00:17:01.382 [2024-12-08 14:12:04.147074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.165835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.165930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:01.382 [2024-12-08 14:12:04.165970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.711 ms 00:17:01.382 [2024-12-08 14:12:04.166000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.175042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.175128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:01.382 [2024-12-08 14:12:04.175177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.982 ms 00:17:01.382 [2024-12-08 14:12:04.175194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.184048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.184135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:01.382 [2024-12-08 14:12:04.184176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.809 ms 00:17:01.382 [2024-12-08 14:12:04.184192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.184465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.184523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.382 [2024-12-08 14:12:04.184618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:01.382 [2024-12-08 14:12:04.184639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.229782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.229887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:01.382 [2024-12-08 14:12:04.229928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.114 ms 00:17:01.382 [2024-12-08 14:12:04.229949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.237731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:01.382 [2024-12-08 14:12:04.249002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.249098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:01.382 [2024-12-08 
14:12:04.249135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.985 ms 00:17:01.382 [2024-12-08 14:12:04.249152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.249220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.249240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:01.382 [2024-12-08 14:12:04.249257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:01.382 [2024-12-08 14:12:04.249272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.249321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.249339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:01.382 [2024-12-08 14:12:04.249354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:01.382 [2024-12-08 14:12:04.249399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.250321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.250406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:01.382 [2024-12-08 14:12:04.250445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:17:01.382 [2024-12-08 14:12:04.250462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.250496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.250515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:01.382 [2024-12-08 14:12:04.250558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:01.382 [2024-12-08 14:12:04.250575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.250613] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:01.382 [2024-12-08 14:12:04.250631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.250664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:01.382 [2024-12-08 14:12:04.250682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:01.382 [2024-12-08 14:12:04.250697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.268940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.269047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:01.382 [2024-12-08 14:12:04.269089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.193 ms 00:17:01.382 [2024-12-08 14:12:04.269106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.269175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.382 [2024-12-08 14:12:04.269212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:01.382 [2024-12-08 14:12:04.269229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:01.382 [2024-12-08 14:12:04.269243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.382 [2024-12-08 14:12:04.270330] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:01.382 [2024-12-08 14:12:04.272872] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 216.378 ms, result 0 00:17:01.382 [2024-12-08 14:12:04.273612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:01.382 [2024-12-08 14:12:04.284743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.770  [2024-12-08T14:12:06.633Z] Copying: 27/256 [MB] (27 MBps) [2024-12-08T14:12:07.575Z] Copying: 49/256 [MB] (22 MBps) [2024-12-08T14:12:08.515Z] Copying: 72/256 [MB] (23 MBps) [2024-12-08T14:12:09.502Z] Copying: 94/256 [MB] (21 MBps) [2024-12-08T14:12:10.437Z] Copying: 115/256 [MB] (20 MBps) [2024-12-08T14:12:11.379Z] Copying: 136/256 [MB] (21 MBps) [2024-12-08T14:12:12.321Z] Copying: 155/256 [MB] (19 MBps) [2024-12-08T14:12:13.705Z] Copying: 166/256 [MB] (10 MBps) [2024-12-08T14:12:14.647Z] Copying: 178/256 [MB] (12 MBps) [2024-12-08T14:12:15.585Z] Copying: 189/256 [MB] (10 MBps) [2024-12-08T14:12:16.523Z] Copying: 203/256 [MB] (13 MBps) [2024-12-08T14:12:17.462Z] Copying: 214/256 [MB] (10 MBps) [2024-12-08T14:12:18.398Z] Copying: 229/256 [MB] (15 MBps) [2024-12-08T14:12:18.965Z] Copying: 248/256 [MB] (19 MBps) [2024-12-08T14:12:18.965Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-08 14:12:18.706766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.045 [2024-12-08 14:12:18.714131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.714169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.045 [2024-12-08 14:12:18.714179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:16.045 [2024-12-08 14:12:18.714186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.714202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:16.045 [2024-12-08 14:12:18.716202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.716225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.045 [2024-12-08 14:12:18.716233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:17:16.045 [2024-12-08 14:12:18.716239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.716439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.716446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.045 [2024-12-08 14:12:18.716452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:16.045 [2024-12-08 14:12:18.716460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.719241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.719258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.045 [2024-12-08 14:12:18.719265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:17:16.045 [2024-12-08 14:12:18.719271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
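Each management step in the trace above is logged by mngt/ftl_mngt.c:trace_step as a name/duration/status triple, and finish_msg totals them per process (here 'FTL startup', duration = 216.378 ms, result 0). A minimal sketch for summarizing the slowest steps out of a captured console log, assuming the output has been saved one entry per line to a hypothetical file ftl.log:

  # List FTL management steps by duration, slowest first.
  # Pairs each 'name:' line with the 'duration:' line that follows it;
  # ftl.log is a placeholder for wherever the console output was saved.
  awk '/trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                 printf "%10.3f ms  %s\n", $0, name }' ftl.log |
    sort -rn | head

Run against a startup trace like the one above, the top entries would be the NV cache and P2L checkpoint restore steps, which dominate the 216 ms total.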
00:17:16.045 [2024-12-08 14:12:18.724450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.724471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:16.045 [2024-12-08 14:12:18.724479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.158 ms 00:17:16.045 [2024-12-08 14:12:18.724486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.742396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.742421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:16.045 [2024-12-08 14:12:18.742429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:17:16.045 [2024-12-08 14:12:18.742434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.753987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.754011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:16.045 [2024-12-08 14:12:18.754019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.518 ms 00:17:16.045 [2024-12-08 14:12:18.754025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.754127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.754134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:16.045 [2024-12-08 14:12:18.754140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:16.045 [2024-12-08 14:12:18.754146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.772046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.045 [2024-12-08 14:12:18.772071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:16.045 [2024-12-08 14:12:18.772078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.888 ms 00:17:16.045 [2024-12-08 14:12:18.772084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.045 [2024-12-08 14:12:18.790557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.046 [2024-12-08 14:12:18.790581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:16.046 [2024-12-08 14:12:18.790588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.430 ms 00:17:16.046 [2024-12-08 14:12:18.790593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.046 [2024-12-08 14:12:18.807862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.046 [2024-12-08 14:12:18.807885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:16.046 [2024-12-08 14:12:18.807892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms 00:17:16.046 [2024-12-08 14:12:18.807898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.046 [2024-12-08 14:12:18.825663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.046 [2024-12-08 14:12:18.825771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:16.046 [2024-12-08 14:12:18.825783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.714 ms 00:17:16.046 [2024-12-08 
14:12:18.825788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.046 [2024-12-08 14:12:18.825818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:16.046 [2024-12-08 14:12:18.825829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825957] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.825999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 
14:12:18.826110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:17:16.046 [2024-12-08 14:12:18.826252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:16.046 [2024-12-08 14:12:18.826279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:16.047 [2024-12-08 14:12:18.826414] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:16.047 [2024-12-08 14:12:18.826419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:17:16.047 [2024-12-08 14:12:18.826425] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:16.047 [2024-12-08 14:12:18.826430] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:16.047 [2024-12-08 14:12:18.826435] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:16.047 [2024-12-08 14:12:18.826441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:16.047 [2024-12-08 14:12:18.826446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:16.047 [2024-12-08 14:12:18.826453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:16.047 [2024-12-08 14:12:18.826459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:16.047 [2024-12-08 14:12:18.826463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:16.047 [2024-12-08 14:12:18.826468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:16.047 [2024-12-08 14:12:18.826473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.047 [2024-12-08 14:12:18.826478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:16.047 [2024-12-08 14:12:18.826484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:17:16.047 [2024-12-08 14:12:18.826489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.836171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.047 [2024-12-08 14:12:18.836264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:16.047 [2024-12-08 14:12:18.836279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.669 ms 00:17:16.047 [2024-12-08 14:12:18.836285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.836454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.047 [2024-12-08 14:12:18.836463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:16.047 [2024-12-08 14:12:18.836470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:16.047 [2024-12-08 14:12:18.836477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.865704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.865738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.047 [2024-12-08 14:12:18.865749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.865755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.865811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.865818] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.047 [2024-12-08 14:12:18.865824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.865829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.865859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.865866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.047 [2024-12-08 14:12:18.865872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.865879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.865892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.865898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.047 [2024-12-08 14:12:18.865904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.865909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.922911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.922941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.047 [2024-12-08 14:12:18.922951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.922958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.047 [2024-12-08 14:12:18.945558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.047 [2024-12-08 14:12:18.945619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.047 [2024-12-08 14:12:18.945661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.047 [2024-12-08 14:12:18.945747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:16.047 [2024-12-08 14:12:18.945792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.047 [2024-12-08 14:12:18.945840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.945880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.047 [2024-12-08 14:12:18.945889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.047 [2024-12-08 14:12:18.945895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.047 [2024-12-08 14:12:18.945900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.047 [2024-12-08 14:12:18.946022] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 231.872 ms, result 0 00:17:16.988 00:17:16.988 00:17:16.988 14:12:19 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:16.988 14:12:19 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:17.557 14:12:20 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:17.557 [2024-12-08 14:12:20.460676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
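With the 'FTL shutdown' process finished (result 0), trim.sh turns to verifying the data: cmp --bytes=4194304 compares the first 4 MiB of the dumped data file byte-for-byte against /dev/zero, which for this trim test amounts to checking that the trimmed range reads back as zeroes, and md5sum records a fingerprint of the file; the spdk_dd run now starting then rewrites ftl0 with the random pattern. A minimal sketch of the same verification pattern, with hypothetical paths:

  # Verify the trimmed range of a dumped bdev image reads back as zeroes,
  # then fingerprint the image for comparison against a later read.
  data=/tmp/ftl_dump.bin            # placeholder for the dumped data file
  if cmp --bytes=$((4 * 1024 * 1024)) "$data" /dev/zero; then
    echo "first 4 MiB is zeroed"
  fi
  md5sum "$data"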
00:17:17.557 [2024-12-08 14:12:20.460818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72499 ] 00:17:17.817 [2024-12-08 14:12:20.614910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.077 [2024-12-08 14:12:20.885968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.337 [2024-12-08 14:12:21.210851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.337 [2024-12-08 14:12:21.210947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:18.599 [2024-12-08 14:12:21.368673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.368736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.599 [2024-12-08 14:12:21.368753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:18.599 [2024-12-08 14:12:21.368762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.371902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.371956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.599 [2024-12-08 14:12:21.371968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:17:18.599 [2024-12-08 14:12:21.371977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.372101] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.599 [2024-12-08 14:12:21.372907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.599 [2024-12-08 14:12:21.372940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.372951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.599 [2024-12-08 14:12:21.372962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:17:18.599 [2024-12-08 14:12:21.372970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.375269] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:18.599 [2024-12-08 14:12:21.390691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.390975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:18.599 [2024-12-08 14:12:21.391013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.424 ms 00:17:18.599 [2024-12-08 14:12:21.391025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.391267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.391295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:18.599 [2024-12-08 14:12:21.391305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:18.599 [2024-12-08 14:12:21.391315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.402692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 
14:12:21.402737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.599 [2024-12-08 14:12:21.402750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.326 ms 00:17:18.599 [2024-12-08 14:12:21.402765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.402897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.402908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.599 [2024-12-08 14:12:21.402919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:18.599 [2024-12-08 14:12:21.402933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.402962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.402972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.599 [2024-12-08 14:12:21.403013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:18.599 [2024-12-08 14:12:21.403023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.403059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:18.599 [2024-12-08 14:12:21.407765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.407804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.599 [2024-12-08 14:12:21.407816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:17:18.599 [2024-12-08 14:12:21.407828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.407890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.599 [2024-12-08 14:12:21.407901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:18.599 [2024-12-08 14:12:21.407911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:18.599 [2024-12-08 14:12:21.407918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.599 [2024-12-08 14:12:21.407938] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:18.599 [2024-12-08 14:12:21.407963] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:18.599 [2024-12-08 14:12:21.408026] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:18.599 [2024-12-08 14:12:21.408048] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:18.600 [2024-12-08 14:12:21.408131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:18.600 [2024-12-08 14:12:21.408142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:18.600 [2024-12-08 14:12:21.408153] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:18.600 [2024-12-08 14:12:21.408165] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408174] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408183] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:18.600 [2024-12-08 14:12:21.408194] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:18.600 [2024-12-08 14:12:21.408203] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:18.600 [2024-12-08 14:12:21.408215] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:18.600 [2024-12-08 14:12:21.408225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.600 [2024-12-08 14:12:21.408233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:18.600 [2024-12-08 14:12:21.408242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:18.600 [2024-12-08 14:12:21.408250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.600 [2024-12-08 14:12:21.408319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.600 [2024-12-08 14:12:21.408329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:18.600 [2024-12-08 14:12:21.408338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:18.600 [2024-12-08 14:12:21.408346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.600 [2024-12-08 14:12:21.408424] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:18.600 [2024-12-08 14:12:21.408436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:18.600 [2024-12-08 14:12:21.408445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408465] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:18.600 [2024-12-08 14:12:21.408473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:18.600 [2024-12-08 14:12:21.408496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.600 [2024-12-08 14:12:21.408513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:18.600 [2024-12-08 14:12:21.408521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:18.600 [2024-12-08 14:12:21.408528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.600 [2024-12-08 14:12:21.408535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:18.600 [2024-12-08 14:12:21.408551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:18.600 [2024-12-08 14:12:21.408559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:18.600 [2024-12-08 14:12:21.408572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:18.600 [2024-12-08 14:12:21.408578] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408586] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:18.600 [2024-12-08 14:12:21.408593] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:18.600 [2024-12-08 14:12:21.408600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:18.600 [2024-12-08 14:12:21.408614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:18.600 [2024-12-08 14:12:21.408636] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:18.600 [2024-12-08 14:12:21.408654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408669] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:18.600 [2024-12-08 14:12:21.408676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:18.600 [2024-12-08 14:12:21.408697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.600 [2024-12-08 14:12:21.408710] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:18.600 [2024-12-08 14:12:21.408718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:18.600 [2024-12-08 14:12:21.408724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.600 [2024-12-08 14:12:21.408731] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:18.600 [2024-12-08 14:12:21.408739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:18.600 [2024-12-08 14:12:21.408749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.600 [2024-12-08 14:12:21.408770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:18.600 [2024-12-08 14:12:21.408777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:18.600 [2024-12-08 14:12:21.408784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:18.600 [2024-12-08 14:12:21.408791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:18.600 [2024-12-08 14:12:21.408798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:18.600 [2024-12-08 14:12:21.408805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:18.600 [2024-12-08 14:12:21.408813] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:18.600 [2024-12-08 14:12:21.408824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.600 [2024-12-08 14:12:21.408833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:18.600 [2024-12-08 14:12:21.408840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:18.600 [2024-12-08 14:12:21.408848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:18.600 [2024-12-08 14:12:21.408855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:18.600 [2024-12-08 14:12:21.408862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:18.600 [2024-12-08 14:12:21.408870] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:18.600 [2024-12-08 14:12:21.408877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:18.600 [2024-12-08 14:12:21.408885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:18.600 [2024-12-08 14:12:21.408893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:18.600 [2024-12-08 14:12:21.408900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:18.600 [2024-12-08 14:12:21.408907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:18.600 [2024-12-08 14:12:21.408914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:18.600 [2024-12-08 14:12:21.408922] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:18.600 [2024-12-08 14:12:21.408928] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:18.600 [2024-12-08 14:12:21.408942] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.600 [2024-12-08 14:12:21.408951] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:18.600 [2024-12-08 14:12:21.408959] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:18.600 [2024-12-08 14:12:21.408966] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:18.600 [2024-12-08 14:12:21.408975] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:18.600 [2024-12-08 14:12:21.409004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.600 [2024-12-08 14:12:21.409014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:18.600 [2024-12-08 14:12:21.409022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:17:18.600 [2024-12-08 14:12:21.409033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.600 [2024-12-08 14:12:21.430729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.600 [2024-12-08 14:12:21.430777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.600 [2024-12-08 14:12:21.430789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.647 ms 00:17:18.600 [2024-12-08 14:12:21.430799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.600 [2024-12-08 14:12:21.430929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.600 [2024-12-08 14:12:21.430940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:18.600 [2024-12-08 14:12:21.430950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:18.601 [2024-12-08 14:12:21.430960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.601 [2024-12-08 14:12:21.479621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.601 [2024-12-08 14:12:21.479675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.601 [2024-12-08 14:12:21.479689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.615 ms 00:17:18.601 [2024-12-08 14:12:21.479699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.601 [2024-12-08 14:12:21.479784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.601 [2024-12-08 14:12:21.479796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.601 [2024-12-08 14:12:21.479809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.601 [2024-12-08 14:12:21.479818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.601 [2024-12-08 14:12:21.480532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.601 [2024-12-08 14:12:21.480565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.601 [2024-12-08 14:12:21.480576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:17:18.601 [2024-12-08 14:12:21.480585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.601 [2024-12-08 14:12:21.480734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.601 [2024-12-08 14:12:21.480747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.601 [2024-12-08 14:12:21.480756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:18.601 [2024-12-08 14:12:21.480765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.601 [2024-12-08 14:12:21.500622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.601 [2024-12-08 14:12:21.500896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.601 [2024-12-08 14:12:21.500916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.832 ms 00:17:18.601 
[2024-12-08 14:12:21.500930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.516396] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:18.862 [2024-12-08 14:12:21.516445] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:18.862 [2024-12-08 14:12:21.516458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.516467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:18.862 [2024-12-08 14:12:21.516478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.382 ms 00:17:18.862 [2024-12-08 14:12:21.516487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.542735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.542788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:18.862 [2024-12-08 14:12:21.542801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.155 ms 00:17:18.862 [2024-12-08 14:12:21.542810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.555866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.555909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:18.862 [2024-12-08 14:12:21.555932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.966 ms 00:17:18.862 [2024-12-08 14:12:21.555941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.568329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.568388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:18.862 [2024-12-08 14:12:21.568400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.289 ms 00:17:18.862 [2024-12-08 14:12:21.568407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.568817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.568832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:18.862 [2024-12-08 14:12:21.568842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:18.862 [2024-12-08 14:12:21.568853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.640434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.640492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:18.862 [2024-12-08 14:12:21.640508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.554 ms 00:17:18.862 [2024-12-08 14:12:21.640525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.651813] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:18.862 [2024-12-08 14:12:21.675668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.675728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:18.862 [2024-12-08 14:12:21.675742] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.038 ms 00:17:18.862 [2024-12-08 14:12:21.675751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.675846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.675856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:18.862 [2024-12-08 14:12:21.675870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:18.862 [2024-12-08 14:12:21.675879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.675949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.675960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:18.862 [2024-12-08 14:12:21.675969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:18.862 [2024-12-08 14:12:21.676009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.677555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.677602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:18.862 [2024-12-08 14:12:21.677613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:17:18.862 [2024-12-08 14:12:21.677621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.677663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.677676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:18.862 [2024-12-08 14:12:21.677686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:18.862 [2024-12-08 14:12:21.677695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.677740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:18.862 [2024-12-08 14:12:21.677750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.677759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:18.862 [2024-12-08 14:12:21.677769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:18.862 [2024-12-08 14:12:21.677778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.862 [2024-12-08 14:12:21.704894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.862 [2024-12-08 14:12:21.704942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:18.862 [2024-12-08 14:12:21.704955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.089 ms 00:17:18.863 [2024-12-08 14:12:21.704963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.863 [2024-12-08 14:12:21.705094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.863 [2024-12-08 14:12:21.705107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:18.863 [2024-12-08 14:12:21.705117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:18.863 [2024-12-08 14:12:21.705127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.863 [2024-12-08 14:12:21.707058] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.863 [2024-12-08 14:12:21.710688] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.984 ms, result 0 00:17:18.863 [2024-12-08 14:12:21.712024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.863 [2024-12-08 14:12:21.726155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.123  [2024-12-08T14:12:22.043Z] Copying: 4096/4096 [kB] (average 14 MBps)[2024-12-08 14:12:21.997153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.123 [2024-12-08 14:12:22.006203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.123 [2024-12-08 14:12:22.006256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.123 [2024-12-08 14:12:22.006269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.123 [2024-12-08 14:12:22.006277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.123 [2024-12-08 14:12:22.006303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.123 [2024-12-08 14:12:22.009520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.123 [2024-12-08 14:12:22.009560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.123 [2024-12-08 14:12:22.009571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:17:19.123 [2024-12-08 14:12:22.009581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.123 [2024-12-08 14:12:22.012725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.123 [2024-12-08 14:12:22.012769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.123 [2024-12-08 14:12:22.012780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 00:17:19.123 [2024-12-08 14:12:22.012796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.123 [2024-12-08 14:12:22.017236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.123 [2024-12-08 14:12:22.017273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.123 [2024-12-08 14:12:22.017284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.422 ms 00:17:19.123 [2024-12-08 14:12:22.017292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.123 [2024-12-08 14:12:22.024180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.123 [2024-12-08 14:12:22.024220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:19.123 [2024-12-08 14:12:22.024232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:17:19.123 [2024-12-08 14:12:22.024248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.385 [2024-12-08 14:12:22.049716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.385 [2024-12-08 14:12:22.049762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.385 [2024-12-08 14:12:22.049774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.415 ms 00:17:19.385 [2024-12-08 
14:12:22.049780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.385 [2024-12-08 14:12:22.066779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.386 [2024-12-08 14:12:22.066823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.386 [2024-12-08 14:12:22.066836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.937 ms 00:17:19.386 [2024-12-08 14:12:22.066844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.386 [2024-12-08 14:12:22.067044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.386 [2024-12-08 14:12:22.067059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.386 [2024-12-08 14:12:22.067070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:19.386 [2024-12-08 14:12:22.067078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.386 [2024-12-08 14:12:22.093377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.386 [2024-12-08 14:12:22.093422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:19.386 [2024-12-08 14:12:22.093433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.280 ms 00:17:19.386 [2024-12-08 14:12:22.093440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.386 [2024-12-08 14:12:22.118911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.386 [2024-12-08 14:12:22.119232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:19.386 [2024-12-08 14:12:22.119254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.396 ms 00:17:19.386 [2024-12-08 14:12:22.119262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.386 [2024-12-08 14:12:22.144513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.386 [2024-12-08 14:12:22.144558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.386 [2024-12-08 14:12:22.144570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.121 ms 00:17:19.386 [2024-12-08 14:12:22.144576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.386 [2024-12-08 14:12:22.169728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.386 [2024-12-08 14:12:22.169770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.386 [2024-12-08 14:12:22.169781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.062 ms 00:17:19.386 [2024-12-08 14:12:22.169790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.386 [2024-12-08 14:12:22.169850] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.386 [2024-12-08 14:12:22.169868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169904] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.169974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 
14:12:22.170125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:19.386 [2024-12-08 14:12:22.170351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.386 [2024-12-08 14:12:22.170458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.387 [2024-12-08 14:12:22.170737] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.387 [2024-12-08 14:12:22.170747] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:17:19.387 [2024-12-08 14:12:22.170756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.387 [2024-12-08 14:12:22.170764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.387 [2024-12-08 
14:12:22.170771] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.387 [2024-12-08 14:12:22.170779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:19.387 [2024-12-08 14:12:22.170790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.387 [2024-12-08 14:12:22.170798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.387 [2024-12-08 14:12:22.170806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.387 [2024-12-08 14:12:22.170814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.387 [2024-12-08 14:12:22.170821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.387 [2024-12-08 14:12:22.170828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.387 [2024-12-08 14:12:22.170836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.387 [2024-12-08 14:12:22.170845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:17:19.387 [2024-12-08 14:12:22.170857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.387 [2024-12-08 14:12:22.184217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.387 [2024-12-08 14:12:22.184258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.387 [2024-12-08 14:12:22.184277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.327 ms 00:17:19.387 [2024-12-08 14:12:22.184285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.387 [2024-12-08 14:12:22.184533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.387 [2024-12-08 14:12:22.184544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.387 [2024-12-08 14:12:22.184553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:19.387 [2024-12-08 14:12:22.184561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.387 [2024-12-08 14:12:22.229410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.387 [2024-12-08 14:12:22.229458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.387 [2024-12-08 14:12:22.229477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.387 [2024-12-08 14:12:22.229486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.387 [2024-12-08 14:12:22.229584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.387 [2024-12-08 14:12:22.229596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.387 [2024-12-08 14:12:22.229605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.387 [2024-12-08 14:12:22.229614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.387 [2024-12-08 14:12:22.229670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.387 [2024-12-08 14:12:22.229681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.387 [2024-12-08 14:12:22.229690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.387 [2024-12-08 14:12:22.229703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.387 [2024-12-08 14:12:22.229723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:19.387 [2024-12-08 14:12:22.229731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.387 [2024-12-08 14:12:22.229741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.387 [2024-12-08 14:12:22.229750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.316766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.317113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.648 [2024-12-08 14:12:22.317146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.317155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.352769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.352973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.648 [2024-12-08 14:12:22.353005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.353095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.353106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.648 [2024-12-08 14:12:22.353116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.353169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.353181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.648 [2024-12-08 14:12:22.353205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.353335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.353348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.648 [2024-12-08 14:12:22.353358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.353409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.353420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.648 [2024-12-08 14:12:22.353429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.353489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.353501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.648 [2024-12-08 14:12:22.353510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 
[2024-12-08 14:12:22.353586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.648 [2024-12-08 14:12:22.353602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.648 [2024-12-08 14:12:22.353610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.648 [2024-12-08 14:12:22.353620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.648 [2024-12-08 14:12:22.353805] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.589 ms, result 0 00:17:20.593 00:17:20.593 00:17:20.593 14:12:23 -- ftl/trim.sh@93 -- # svcpid=72533 00:17:20.593 14:12:23 -- ftl/trim.sh@94 -- # waitforlisten 72533 00:17:20.593 14:12:23 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:20.593 14:12:23 -- common/autotest_common.sh@829 -- # '[' -z 72533 ']' 00:17:20.593 14:12:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.593 14:12:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.593 14:12:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.593 14:12:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.593 14:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.593 [2024-12-08 14:12:23.446076] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.593 [2024-12-08 14:12:23.446240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72533 ] 00:17:20.854 [2024-12-08 14:12:23.602211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.116 [2024-12-08 14:12:23.872156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.116 [2024-12-08 14:12:23.872405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.062 14:12:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.062 14:12:24 -- common/autotest_common.sh@862 -- # return 0 00:17:22.062 14:12:24 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:22.337 [2024-12-08 14:12:25.161668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.337 [2024-12-08 14:12:25.161974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.599 [2024-12-08 14:12:25.336223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.336453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.599 [2024-12-08 14:12:25.336486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:22.599 [2024-12-08 14:12:25.336496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.339879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.340115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.599 [2024-12-08 14:12:25.340145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
3.352 ms 00:17:22.599 [2024-12-08 14:12:25.340155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.340332] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.599 [2024-12-08 14:12:25.341165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.599 [2024-12-08 14:12:25.341222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.341233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.599 [2024-12-08 14:12:25.341246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:17:22.599 [2024-12-08 14:12:25.341255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.343665] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.599 [2024-12-08 14:12:25.359180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.359230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.599 [2024-12-08 14:12:25.359245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.522 ms 00:17:22.599 [2024-12-08 14:12:25.359257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.359370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.359384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.599 [2024-12-08 14:12:25.359394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:22.599 [2024-12-08 14:12:25.359404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.370632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.370823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.599 [2024-12-08 14:12:25.370843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.169 ms 00:17:22.599 [2024-12-08 14:12:25.370855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.370975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.371020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.599 [2024-12-08 14:12:25.371031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:22.599 [2024-12-08 14:12:25.371042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.599 [2024-12-08 14:12:25.371075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.599 [2024-12-08 14:12:25.371086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.599 [2024-12-08 14:12:25.371096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:22.599 [2024-12-08 14:12:25.371108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.600 [2024-12-08 14:12:25.371142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.600 [2024-12-08 14:12:25.375866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.600 [2024-12-08 14:12:25.375905] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.600 [2024-12-08 14:12:25.375918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.735 ms 00:17:22.600 [2024-12-08 14:12:25.375926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.600 [2024-12-08 14:12:25.376013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.600 [2024-12-08 14:12:25.376024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.600 [2024-12-08 14:12:25.376035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:22.600 [2024-12-08 14:12:25.376046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.600 [2024-12-08 14:12:25.376073] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.600 [2024-12-08 14:12:25.376098] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:22.600 [2024-12-08 14:12:25.376140] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.600 [2024-12-08 14:12:25.376157] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:22.600 [2024-12-08 14:12:25.376241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:22.600 [2024-12-08 14:12:25.376253] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.600 [2024-12-08 14:12:25.376271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:22.600 [2024-12-08 14:12:25.376282] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376293] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376302] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.600 [2024-12-08 14:12:25.376313] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.600 [2024-12-08 14:12:25.376321] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:22.600 [2024-12-08 14:12:25.376334] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:22.600 [2024-12-08 14:12:25.376342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.600 [2024-12-08 14:12:25.376352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.600 [2024-12-08 14:12:25.376360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:22.600 [2024-12-08 14:12:25.376372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.600 [2024-12-08 14:12:25.376441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.600 [2024-12-08 14:12:25.376453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.600 [2024-12-08 14:12:25.376462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:22.600 [2024-12-08 14:12:25.376471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.600 [2024-12-08 14:12:25.376548] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.600 [2024-12-08 14:12:25.376563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.600 [2024-12-08 14:12:25.376571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.600 [2024-12-08 14:12:25.376602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.600 [2024-12-08 14:12:25.376633] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.600 [2024-12-08 14:12:25.376651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.600 [2024-12-08 14:12:25.376659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.600 [2024-12-08 14:12:25.376665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.600 [2024-12-08 14:12:25.376675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.600 [2024-12-08 14:12:25.376682] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:22.600 [2024-12-08 14:12:25.376694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.600 [2024-12-08 14:12:25.376712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:22.600 [2024-12-08 14:12:25.376719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:22.600 [2024-12-08 14:12:25.376734] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:22.600 [2024-12-08 14:12:25.376744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.600 [2024-12-08 14:12:25.376763] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.600 [2024-12-08 14:12:25.376796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376811] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.600 [2024-12-08 14:12:25.376819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376837] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.600 [2024-12-08 14:12:25.376845] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376860] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.600 [2024-12-08 14:12:25.376871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.600 [2024-12-08 14:12:25.376886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.600 [2024-12-08 14:12:25.376892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:22.600 [2024-12-08 14:12:25.376903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.600 [2024-12-08 14:12:25.376911] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.600 [2024-12-08 14:12:25.376923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.600 [2024-12-08 14:12:25.376930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.600 [2024-12-08 14:12:25.376942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.600 [2024-12-08 14:12:25.376951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:22.600 [2024-12-08 14:12:25.376959] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.600 [2024-12-08 14:12:25.376965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.600 [2024-12-08 14:12:25.376976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.600 [2024-12-08 14:12:25.377007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.600 [2024-12-08 14:12:25.377018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.600 [2024-12-08 14:12:25.377026] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.600 [2024-12-08 14:12:25.377039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.600 [2024-12-08 14:12:25.377048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.600 [2024-12-08 14:12:25.377058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:22.600 [2024-12-08 14:12:25.377067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.600 [2024-12-08 14:12:25.377081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:22.600 [2024-12-08 14:12:25.377089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:22.600 [2024-12-08 14:12:25.377099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:22.600 [2024-12-08 14:12:25.377107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:22.600 [2024-12-08 
14:12:25.377117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:22.600 [2024-12-08 14:12:25.377125] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:22.600 [2024-12-08 14:12:25.377165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:22.600 [2024-12-08 14:12:25.377174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:22.600 [2024-12-08 14:12:25.377199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:22.600 [2024-12-08 14:12:25.377208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:22.600 [2024-12-08 14:12:25.377219] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.600 [2024-12-08 14:12:25.377227] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.600 [2024-12-08 14:12:25.377237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.600 [2024-12-08 14:12:25.377244] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.600 [2024-12-08 14:12:25.377255] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.600 [2024-12-08 14:12:25.377265] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.600 [2024-12-08 14:12:25.377277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.600 [2024-12-08 14:12:25.377285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.600 [2024-12-08 14:12:25.377296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:17:22.600 [2024-12-08 14:12:25.377303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.600 [2024-12-08 14:12:25.399301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.600 [2024-12-08 14:12:25.399351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.600 [2024-12-08 14:12:25.399367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.937 ms 00:17:22.601 [2024-12-08 14:12:25.399379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.399517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.399529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.601 [2024-12-08 14:12:25.399540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:22.601 [2024-12-08 14:12:25.399549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.439632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 
14:12:25.439678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.601 [2024-12-08 14:12:25.439692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.058 ms 00:17:22.601 [2024-12-08 14:12:25.439701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.439777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.439786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.601 [2024-12-08 14:12:25.439798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:22.601 [2024-12-08 14:12:25.439806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.440549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.440590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.601 [2024-12-08 14:12:25.440609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:17:22.601 [2024-12-08 14:12:25.440619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.440774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.440795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.601 [2024-12-08 14:12:25.440810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:22.601 [2024-12-08 14:12:25.440819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.462173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.462220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.601 [2024-12-08 14:12:25.462235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.326 ms 00:17:22.601 [2024-12-08 14:12:25.462243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.477753] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:22.601 [2024-12-08 14:12:25.477799] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.601 [2024-12-08 14:12:25.477815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.477824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.601 [2024-12-08 14:12:25.477837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.450 ms 00:17:22.601 [2024-12-08 14:12:25.477845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.601 [2024-12-08 14:12:25.504117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.601 [2024-12-08 14:12:25.504163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.601 [2024-12-08 14:12:25.504179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.178 ms 00:17:22.601 [2024-12-08 14:12:25.504188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.517321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.517373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:22.861 [2024-12-08 14:12:25.517388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.039 ms 00:17:22.861 [2024-12-08 14:12:25.517397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.530159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.530201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:22.861 [2024-12-08 14:12:25.530218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.677 ms 00:17:22.861 [2024-12-08 14:12:25.530226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.530630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.530648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.861 [2024-12-08 14:12:25.530659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:22.861 [2024-12-08 14:12:25.530669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.603216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.603487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.861 [2024-12-08 14:12:25.603514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.516 ms 00:17:22.861 [2024-12-08 14:12:25.603525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.615188] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.861 [2024-12-08 14:12:25.639885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.639946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.861 [2024-12-08 14:12:25.639959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.235 ms 00:17:22.861 [2024-12-08 14:12:25.639970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.640087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.640105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.861 [2024-12-08 14:12:25.640120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:22.861 [2024-12-08 14:12:25.640131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.640195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.640207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.861 [2024-12-08 14:12:25.640216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:22.861 [2024-12-08 14:12:25.640226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.641778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.641829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:22.861 [2024-12-08 14:12:25.641840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:17:22.861 [2024-12-08 14:12:25.641854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:22.861 [2024-12-08 14:12:25.641900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.641911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.861 [2024-12-08 14:12:25.641921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:22.861 [2024-12-08 14:12:25.641931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.641976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.861 [2024-12-08 14:12:25.642010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.642019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.861 [2024-12-08 14:12:25.642030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:22.861 [2024-12-08 14:12:25.642039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.669371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.669611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.861 [2024-12-08 14:12:25.669638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.298 ms 00:17:22.861 [2024-12-08 14:12:25.669649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.669758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.861 [2024-12-08 14:12:25.669769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.861 [2024-12-08 14:12:25.669783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:22.861 [2024-12-08 14:12:25.669795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.861 [2024-12-08 14:12:25.671095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.861 [2024-12-08 14:12:25.674762] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 334.473 ms, result 0 00:17:22.861 [2024-12-08 14:12:25.677095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.861 Some configs were skipped because the RPC state that can call them passed over. 
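[editor's note] The 'FTL startup' management pipeline above finishes with result 0 after 334.473 ms, and every step it logs follows the same four-line trace_step pattern from mngt/ftl_mngt.c: Action (or Rollback), then name:, duration:, and status:. As a reading aid only, a per-step timing summary can be pulled out of a saved copy of this console output; a minimal sketch, assuming the output was saved to build.log (hypothetical file name, standard awk only):

  # Pair each step name with the duration reported on the following trace line.
  awk '
    /407:trace_step:/ && /name: /     { sub(/.*name: /, "");     step = $0 }
    /409:trace_step:/ && /duration: / { sub(/.*duration: /, ""); printf "%-36s %s\n", step, $0 }
  ' build.log

For the startup sequence above this prints lines such as 'Initialize NV cache  40.058 ms' and 'Restore P2L checkpoints  72.516 ms', matching the per-step durations reported in the trace.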
00:17:22.861 14:12:25 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:23.121 [2024-12-08 14:12:25.928754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.121 [2024-12-08 14:12:25.928950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:17:23.121 [2024-12-08 14:12:25.929028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.174 ms
00:17:23.121 [2024-12-08 14:12:25.929055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.121 [2024-12-08 14:12:25.929115] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 26.536 ms, result 0
00:17:23.121 true
00:17:23.121 14:12:25 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:23.381 [2024-12-08 14:12:26.143250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.381 [2024-12-08 14:12:26.143346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:17:23.381 [2024-12-08 14:12:26.143387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.244 ms
00:17:23.381 [2024-12-08 14:12:26.143405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.381 [2024-12-08 14:12:26.143445] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.436 ms, result 0
00:17:23.381 true
00:17:23.381 14:12:26 -- ftl/trim.sh@102 -- # killprocess 72533
00:17:23.381 14:12:26 -- common/autotest_common.sh@936 -- # '[' -z 72533 ']'
00:17:23.381 14:12:26 -- common/autotest_common.sh@940 -- # kill -0 72533
00:17:23.381 14:12:26 -- common/autotest_common.sh@941 -- # uname
00:17:23.381 14:12:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:23.381 14:12:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72533
00:17:23.381 killing process with pid 72533
00:17:23.381 14:12:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:23.381 14:12:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:23.381 14:12:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72533'
00:17:23.381 14:12:26 -- common/autotest_common.sh@955 -- # kill 72533
00:17:23.381 14:12:26 -- common/autotest_common.sh@960 -- # wait 72533
00:17:23.953 [2024-12-08 14:12:26.755296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.953 [2024-12-08 14:12:26.755349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:23.953 [2024-12-08 14:12:26.755360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:23.953 [2024-12-08 14:12:26.755369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.953 [2024-12-08 14:12:26.755389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:23.953 [2024-12-08 14:12:26.757556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:23.953 [2024-12-08 14:12:26.757582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:23.953 [2024-12-08 14:12:26.757594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms
00:17:23.953 [2024-12-08 14:12:26.757600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:23.953 [2024-12-08
14:12:26.757832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.757841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.953 [2024-12-08 14:12:26.757849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:23.953 [2024-12-08 14:12:26.757856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.761374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.761404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.953 [2024-12-08 14:12:26.761414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:17:23.953 [2024-12-08 14:12:26.761420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.766734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.766903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:23.953 [2024-12-08 14:12:26.766921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.283 ms 00:17:23.953 [2024-12-08 14:12:26.766928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.775044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.775068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.953 [2024-12-08 14:12:26.775079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.066 ms 00:17:23.953 [2024-12-08 14:12:26.775085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.782431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.782455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.953 [2024-12-08 14:12:26.782464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.316 ms 00:17:23.953 [2024-12-08 14:12:26.782471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.782581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.782588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.953 [2024-12-08 14:12:26.782597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:23.953 [2024-12-08 14:12:26.782603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.791511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.791535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:23.953 [2024-12-08 14:12:26.791543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.891 ms 00:17:23.953 [2024-12-08 14:12:26.791549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.799760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.799782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:23.953 [2024-12-08 14:12:26.799795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.181 ms 00:17:23.953 [2024-12-08 14:12:26.799801] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.807560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.807583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.953 [2024-12-08 14:12:26.807592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.729 ms 00:17:23.953 [2024-12-08 14:12:26.807597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.815269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.953 [2024-12-08 14:12:26.815291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.953 [2024-12-08 14:12:26.815299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.612 ms 00:17:23.953 [2024-12-08 14:12:26.815304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.953 [2024-12-08 14:12:26.815332] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.954 [2024-12-08 14:12:26.815345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815463] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815631] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 
14:12:26.815797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.954 [2024-12-08 14:12:26.815923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.955 [2024-12-08 14:12:26.815961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.815997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.816007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.816014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.816020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.816028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.955 [2024-12-08 14:12:26.816041] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.955 [2024-12-08 14:12:26.816050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:17:23.955 [2024-12-08 14:12:26.816056] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.955 [2024-12-08 14:12:26.816064] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.955 [2024-12-08 14:12:26.816070] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.955 [2024-12-08 14:12:26.816078] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.955 [2024-12-08 14:12:26.816084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.955 [2024-12-08 14:12:26.816092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.955 [2024-12-08 14:12:26.816099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.955 [2024-12-08 14:12:26.816117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.955 [2024-12-08 14:12:26.816122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.955 [2024-12-08 14:12:26.816129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.955 [2024-12-08 14:12:26.816135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.955 [2024-12-08 14:12:26.816145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:17:23.955 [2024-12-08 14:12:26.816151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.955 [2024-12-08 14:12:26.826193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.955 [2024-12-08 14:12:26.826216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.955 [2024-12-08 14:12:26.826227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.025 ms 00:17:23.955 [2024-12-08 14:12:26.826233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.955 [2024-12-08 14:12:26.826411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.955 [2024-12-08 14:12:26.826421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.955 
[2024-12-08 14:12:26.826429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:23.955 [2024-12-08 14:12:26.826435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.872893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.872944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.214 [2024-12-08 14:12:26.872959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.872967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.873103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.873116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.214 [2024-12-08 14:12:26.873147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.873155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.873219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.873230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.214 [2024-12-08 14:12:26.873241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.873248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.873269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.873278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.214 [2024-12-08 14:12:26.873290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.873298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.949909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.949952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.214 [2024-12-08 14:12:26.949965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.949972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.978534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.978570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.214 [2024-12-08 14:12:26.978581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.978589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.978641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.978650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.214 [2024-12-08 14:12:26.978661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.214 [2024-12-08 14:12:26.978669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.214 [2024-12-08 14:12:26.978699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.214 [2024-12-08 14:12:26.978707] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:24.214 [2024-12-08 14:12:26.978716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:24.214 [2024-12-08 14:12:26.978725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.214 [2024-12-08 14:12:26.978813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:24.214 [2024-12-08 14:12:26.978822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:24.214 [2024-12-08 14:12:26.978831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:24.214 [2024-12-08 14:12:26.978838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.214 [2024-12-08 14:12:26.978869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:24.214 [2024-12-08 14:12:26.978878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:24.214 [2024-12-08 14:12:26.978887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:24.214 [2024-12-08 14:12:26.978895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.214 [2024-12-08 14:12:26.978936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:24.214 [2024-12-08 14:12:26.978945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:24.214 [2024-12-08 14:12:26.978955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:24.214 [2024-12-08 14:12:26.978963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.215 [2024-12-08 14:12:26.979033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:24.215 [2024-12-08 14:12:26.979044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:24.215 [2024-12-08 14:12:26.979054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:24.215 [2024-12-08 14:12:26.979062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:24.215 [2024-12-08 14:12:26.979201] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 223.880 ms, result 0
00:17:25.155 14:12:27 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:25.155 [2024-12-08 14:12:27.903527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
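[editor's note] With the first FTL instance torn down cleanly ('FTL shutdown', result 0 above), the test re-opens the device through spdk_dd to read data back and check the trimmed ranges. Condensed, the commands that actually ran in this section are the following (paths copied verbatim from the trace above; the comments are editorial, and the LBA arithmetic uses the L2P entry count reported further down in the layout dump):

  # Trim 1024 blocks at the start and at the very end of the address space:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # 23591936 = 23592960 (L2P entries) - 1024, i.e. the last 1024 blocks.
  # After shutdown, read 65536 blocks back through a fresh FTL instance:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json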
00:17:25.155 [2024-12-08 14:12:27.903783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72594 ] 00:17:25.155 [2024-12-08 14:12:28.052522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.415 [2024-12-08 14:12:28.270382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.675 [2024-12-08 14:12:28.555259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.675 [2024-12-08 14:12:28.555340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.936 [2024-12-08 14:12:28.710733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.710795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.936 [2024-12-08 14:12:28.710809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:25.936 [2024-12-08 14:12:28.710818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.713742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.713930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.936 [2024-12-08 14:12:28.713951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:17:25.936 [2024-12-08 14:12:28.713959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.714449] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.936 [2024-12-08 14:12:28.715297] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.936 [2024-12-08 14:12:28.715333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.715342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.936 [2024-12-08 14:12:28.715354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:17:25.936 [2024-12-08 14:12:28.715361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.717071] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:25.936 [2024-12-08 14:12:28.731539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.731585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:25.936 [2024-12-08 14:12:28.731598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.469 ms 00:17:25.936 [2024-12-08 14:12:28.731606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.731720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.731732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:25.936 [2024-12-08 14:12:28.731741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:25.936 [2024-12-08 14:12:28.731748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.739547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 
14:12:28.739734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.936 [2024-12-08 14:12:28.739754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.753 ms 00:17:25.936 [2024-12-08 14:12:28.739768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.739886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.739896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.936 [2024-12-08 14:12:28.739905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:25.936 [2024-12-08 14:12:28.739913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.739940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.739949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.936 [2024-12-08 14:12:28.739957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:25.936 [2024-12-08 14:12:28.739965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.740019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:25.936 [2024-12-08 14:12:28.744130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.744180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.936 [2024-12-08 14:12:28.744191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms 00:17:25.936 [2024-12-08 14:12:28.744201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.744275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.744285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.936 [2024-12-08 14:12:28.744294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:25.936 [2024-12-08 14:12:28.744301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.744321] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:25.936 [2024-12-08 14:12:28.744343] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:25.936 [2024-12-08 14:12:28.744379] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:25.936 [2024-12-08 14:12:28.744397] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:25.936 [2024-12-08 14:12:28.744471] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:25.936 [2024-12-08 14:12:28.744482] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.936 [2024-12-08 14:12:28.744492] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:25.936 [2024-12-08 14:12:28.744502] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.936 [2024-12-08 14:12:28.744512] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.936 [2024-12-08 14:12:28.744519] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:25.936 [2024-12-08 14:12:28.744527] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.936 [2024-12-08 14:12:28.744534] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:25.936 [2024-12-08 14:12:28.744545] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:25.936 [2024-12-08 14:12:28.744554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.744562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.936 [2024-12-08 14:12:28.744569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:25.936 [2024-12-08 14:12:28.744577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.936 [2024-12-08 14:12:28.744642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.936 [2024-12-08 14:12:28.744652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.936 [2024-12-08 14:12:28.744660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:25.937 [2024-12-08 14:12:28.744667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.744743] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.937 [2024-12-08 14:12:28.744753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.937 [2024-12-08 14:12:28.744761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.937 [2024-12-08 14:12:28.744769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.937 [2024-12-08 14:12:28.744783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:25.937 [2024-12-08 14:12:28.744798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.937 [2024-12-08 14:12:28.744805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.937 [2024-12-08 14:12:28.744820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.937 [2024-12-08 14:12:28.744826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:25.937 [2024-12-08 14:12:28.744834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.937 [2024-12-08 14:12:28.744841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.937 [2024-12-08 14:12:28.744855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:25.937 [2024-12-08 14:12:28.744862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.937 [2024-12-08 14:12:28.744876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:25.937 [2024-12-08 14:12:28.744882] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:25.937 [2024-12-08 14:12:28.744895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:25.937 [2024-12-08 14:12:28.744902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:25.937 [2024-12-08 14:12:28.744908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.937 [2024-12-08 14:12:28.744915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.937 [2024-12-08 14:12:28.744928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.937 [2024-12-08 14:12:28.744935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.937 [2024-12-08 14:12:28.744947] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.937 [2024-12-08 14:12:28.744953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:25.937 [2024-12-08 14:12:28.744961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.937 [2024-12-08 14:12:28.744968] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.937 [2024-12-08 14:12:28.744974] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:25.937 [2024-12-08 14:12:28.745003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:25.937 [2024-12-08 14:12:28.745011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.937 [2024-12-08 14:12:28.745017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:25.937 [2024-12-08 14:12:28.745024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.937 [2024-12-08 14:12:28.745030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.937 [2024-12-08 14:12:28.745037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:25.937 [2024-12-08 14:12:28.745043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.937 [2024-12-08 14:12:28.745049] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.937 [2024-12-08 14:12:28.745056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.937 [2024-12-08 14:12:28.745065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.937 [2024-12-08 14:12:28.745078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.937 [2024-12-08 14:12:28.745086] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.937 [2024-12-08 14:12:28.745093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.937 [2024-12-08 14:12:28.745099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.937 [2024-12-08 14:12:28.745106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.937 [2024-12-08 14:12:28.745114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.937 [2024-12-08 14:12:28.745121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.937 [2024-12-08 14:12:28.745136] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.937 [2024-12-08 14:12:28.745147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.937 [2024-12-08 14:12:28.745156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:25.937 [2024-12-08 14:12:28.745163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:25.937 [2024-12-08 14:12:28.745171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:25.937 [2024-12-08 14:12:28.745178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:25.937 [2024-12-08 14:12:28.745197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:25.937 [2024-12-08 14:12:28.745204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:25.937 [2024-12-08 14:12:28.745212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:25.937 [2024-12-08 14:12:28.745219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:25.937 [2024-12-08 14:12:28.745226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:25.937 [2024-12-08 14:12:28.745234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:25.937 [2024-12-08 14:12:28.745242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:25.937 [2024-12-08 14:12:28.745249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:25.937 [2024-12-08 14:12:28.745257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:25.937 [2024-12-08 14:12:28.745264] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.937 [2024-12-08 14:12:28.745278] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.937 [2024-12-08 14:12:28.745286] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.937 [2024-12-08 14:12:28.745294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.937 [2024-12-08 14:12:28.745301] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.937 [2024-12-08 14:12:28.745308] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.937 [2024-12-08 14:12:28.745316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.745324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.937 [2024-12-08 14:12:28.745332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:17:25.937 [2024-12-08 14:12:28.745340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.763298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.763339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.937 [2024-12-08 14:12:28.763351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.913 ms 00:17:25.937 [2024-12-08 14:12:28.763360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.763487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.763497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.937 [2024-12-08 14:12:28.763507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:25.937 [2024-12-08 14:12:28.763515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.812356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.812408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.937 [2024-12-08 14:12:28.812421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.817 ms 00:17:25.937 [2024-12-08 14:12:28.812430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.812511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.812522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.937 [2024-12-08 14:12:28.812537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.937 [2024-12-08 14:12:28.812544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.813124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.813152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.937 [2024-12-08 14:12:28.813162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:17:25.937 [2024-12-08 14:12:28.813170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.937 [2024-12-08 14:12:28.813332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.937 [2024-12-08 14:12:28.813350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.937 [2024-12-08 14:12:28.813360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:25.937 [2024-12-08 14:12:28.813367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.938 [2024-12-08 14:12:28.830214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.938 [2024-12-08 14:12:28.830255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.938 [2024-12-08 14:12:28.830267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.821 ms 00:17:25.938 
[2024-12-08 14:12:28.830278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.938 [2024-12-08 14:12:28.843818] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:25.938 [2024-12-08 14:12:28.843939] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:25.938 [2024-12-08 14:12:28.843952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.938 [2024-12-08 14:12:28.843960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:25.938 [2024-12-08 14:12:28.843969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.564 ms 00:17:25.938 [2024-12-08 14:12:28.843976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.869010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.869046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.198 [2024-12-08 14:12:28.869057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.957 ms 00:17:26.198 [2024-12-08 14:12:28.869064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.881166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.881202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.198 [2024-12-08 14:12:28.881219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.036 ms 00:17:26.198 [2024-12-08 14:12:28.881225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.892894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.892922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.198 [2024-12-08 14:12:28.892932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.609 ms 00:17:26.198 [2024-12-08 14:12:28.892939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.893315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.893331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.198 [2024-12-08 14:12:28.893340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:26.198 [2024-12-08 14:12:28.893350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.952935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.953009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.198 [2024-12-08 14:12:28.953023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.562 ms 00:17:26.198 [2024-12-08 14:12:28.953036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.963637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.198 [2024-12-08 14:12:28.979153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.979189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.198 [2024-12-08 14:12:28.979202] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.039 ms 00:17:26.198 [2024-12-08 14:12:28.979210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.979282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.979294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.198 [2024-12-08 14:12:28.979304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:26.198 [2024-12-08 14:12:28.979314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.979359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.979368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.198 [2024-12-08 14:12:28.979375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:26.198 [2024-12-08 14:12:28.979382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.980592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.980624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:26.198 [2024-12-08 14:12:28.980633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:17:26.198 [2024-12-08 14:12:28.980640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.980675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.980683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.198 [2024-12-08 14:12:28.980691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.198 [2024-12-08 14:12:28.980698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:28.980731] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.198 [2024-12-08 14:12:28.980740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:28.980747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.198 [2024-12-08 14:12:28.980755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:26.198 [2024-12-08 14:12:28.980762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:29.005201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.198 [2024-12-08 14:12:29.005241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.198 [2024-12-08 14:12:29.005252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.415 ms 00:17:26.198 [2024-12-08 14:12:29.005260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.198 [2024-12-08 14:12:29.005354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.199 [2024-12-08 14:12:29.005364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.199 [2024-12-08 14:12:29.005372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:26.199 [2024-12-08 14:12:29.005380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.199 [2024-12-08 14:12:29.006362] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.199 [2024-12-08 14:12:29.009716] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.346 ms, result 0 00:17:26.199 [2024-12-08 14:12:29.011191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.199 [2024-12-08 14:12:29.024956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.582  [2024-12-08T14:12:31.448Z] Copying: 20/256 [MB] (20 MBps) [2024-12-08T14:12:32.393Z] Copying: 40/256 [MB] (20 MBps) [2024-12-08T14:12:33.348Z] Copying: 62/256 [MB] (21 MBps) [2024-12-08T14:12:34.294Z] Copying: 86/256 [MB] (24 MBps) [2024-12-08T14:12:35.431Z] Copying: 108/256 [MB] (22 MBps) [2024-12-08T14:12:36.371Z] Copying: 131/256 [MB] (23 MBps) [2024-12-08T14:12:37.309Z] Copying: 153/256 [MB] (21 MBps) [2024-12-08T14:12:38.248Z] Copying: 175/256 [MB] (22 MBps) [2024-12-08T14:12:39.187Z] Copying: 195/256 [MB] (19 MBps) [2024-12-08T14:12:40.131Z] Copying: 217/256 [MB] (21 MBps) [2024-12-08T14:12:41.513Z] Copying: 235/256 [MB] (17 MBps) [2024-12-08T14:12:41.513Z] Copying: 248/256 [MB] (13 MBps) [2024-12-08T14:12:41.775Z] Copying: 256/256 [MB] (average 20 MBps)[2024-12-08 14:12:41.530249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.855 [2024-12-08 14:12:41.547059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.855 [2024-12-08 14:12:41.547233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.855 [2024-12-08 14:12:41.547315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:38.855 [2024-12-08 14:12:41.547341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.855 [2024-12-08 14:12:41.547390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.855 [2024-12-08 14:12:41.550653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.855 [2024-12-08 14:12:41.550808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.855 [2024-12-08 14:12:41.550876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:17:38.855 [2024-12-08 14:12:41.550900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.855 [2024-12-08 14:12:41.551253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.855 [2024-12-08 14:12:41.551287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.855 [2024-12-08 14:12:41.551303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:38.855 [2024-12-08 14:12:41.551312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.855 [2024-12-08 14:12:41.555065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.855 [2024-12-08 14:12:41.555089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.855 [2024-12-08 14:12:41.555099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 00:17:38.855 [2024-12-08 14:12:41.555107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.855 [2024-12-08 14:12:41.562007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.855 
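The startup sequence above is the FTL management pipeline logging each step as a 406/407/409/410 trace_step quadruple (Action, name, duration, status); the 'FTL startup' process completes in 295.346 ms before the 256 MB copy begins. A quick way to turn those records into a slowest-first timing table (a sketch: build.log is an assumed name for the captured console output, and the awk relies on the harness's native one-record-per-line layout):

  awk '
    /407:trace_step/ { sub(/.*name: /, "");     name = $0 }           # capture the step name
    /409:trace_step/ { sub(/.*duration: /, ""); print $1 "\t" name }  # pair it with the duration in ms
  ' build.log | sort -rn | head

The Copying: records in the same stretch are the data phase's progress samples (13-24 MBps here, 20 MBps average); they can be pulled out the same way with grep -o 'Copying: [0-9]*/256 \[MB\] ([0-9]* MBps)' build.log, which works even where several samples share a physical line.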
[2024-12-08 14:12:41.562161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:38.855 [2024-12-08 14:12:41.562181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:17:38.855 [2024-12-08 14:12:41.562196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.855 [2024-12-08 14:12:41.588023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.855 [2024-12-08 14:12:41.588073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.856 [2024-12-08 14:12:41.588085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.747 ms 00:17:38.856 [2024-12-08 14:12:41.588093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.604417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.856 [2024-12-08 14:12:41.604463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.856 [2024-12-08 14:12:41.604475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.255 ms 00:17:38.856 [2024-12-08 14:12:41.604483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.604653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.856 [2024-12-08 14:12:41.604665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.856 [2024-12-08 14:12:41.604676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:38.856 [2024-12-08 14:12:41.604684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.630841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.856 [2024-12-08 14:12:41.630885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:38.856 [2024-12-08 14:12:41.630896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:17:38.856 [2024-12-08 14:12:41.630903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.656701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.856 [2024-12-08 14:12:41.656744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:38.856 [2024-12-08 14:12:41.656756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.721 ms 00:17:38.856 [2024-12-08 14:12:41.656763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.681899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.856 [2024-12-08 14:12:41.681941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.856 [2024-12-08 14:12:41.681953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.058 ms 00:17:38.856 [2024-12-08 14:12:41.681960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.706757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.856 [2024-12-08 14:12:41.706800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.856 [2024-12-08 14:12:41.706811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.686 ms 00:17:38.856 [2024-12-08 14:12:41.706818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.856 [2024-12-08 14:12:41.706881] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.856 [2024-12-08 14:12:41.706898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.706978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707105] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 
14:12:41.707328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.856 [2024-12-08 14:12:41.707470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.857 [2024-12-08 14:12:41.707530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.857 [2024-12-08 14:12:41.707745] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.857 [2024-12-08 14:12:41.707753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c62f382-c384-45a2-bc0e-3a3545a6a62f 00:17:38.857 [2024-12-08 14:12:41.707762] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.857 [2024-12-08 14:12:41.707770] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.857 [2024-12-08 14:12:41.707777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.857 [2024-12-08 14:12:41.707786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.857 [2024-12-08 14:12:41.707796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.857 [2024-12-08 14:12:41.707805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.857 [2024-12-08 14:12:41.707813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.857 [2024-12-08 14:12:41.707819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.857 [2024-12-08 14:12:41.707827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.857 [2024-12-08 14:12:41.707834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.857 [2024-12-08 14:12:41.707842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.857 [2024-12-08 14:12:41.707851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:17:38.857 [2024-12-08 14:12:41.707859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.857 [2024-12-08 14:12:41.721201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.857 [2024-12-08 14:12:41.721240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.857 [2024-12-08 14:12:41.721257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.308 ms 00:17:38.857 [2024-12-08 14:12:41.721265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.857 [2024-12-08 14:12:41.721507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.857 [2024-12-08 14:12:41.721517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.857 [2024-12-08 14:12:41.721526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:38.857 [2024-12-08 14:12:41.721533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.857 [2024-12-08 14:12:41.762960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.857 [2024-12-08 14:12:41.763026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.857 [2024-12-08 14:12:41.763037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.857 [2024-12-08 14:12:41.763045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.857 [2024-12-08 14:12:41.763143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.857 [2024-12-08 14:12:41.763153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.857 [2024-12-08 14:12:41.763161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:38.857 [2024-12-08 14:12:41.763169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.857 [2024-12-08 14:12:41.763219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.857 [2024-12-08 14:12:41.763229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.857 [2024-12-08 14:12:41.763242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.857 [2024-12-08 14:12:41.763250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.857 [2024-12-08 14:12:41.763268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.857 [2024-12-08 14:12:41.763277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.857 [2024-12-08 14:12:41.763284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.857 [2024-12-08 14:12:41.763292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.843388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.843455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.118 [2024-12-08 14:12:41.843469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.843478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.118 [2024-12-08 14:12:41.876288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.118 [2024-12-08 14:12:41.876381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.118 [2024-12-08 14:12:41.876448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.118 [2024-12-08 14:12:41.876578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.118 
[2024-12-08 14:12:41.876641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.118 [2024-12-08 14:12:41.876711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.118 [2024-12-08 14:12:41.876786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.118 [2024-12-08 14:12:41.876796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.118 [2024-12-08 14:12:41.876803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.118 [2024-12-08 14:12:41.876960] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.949 ms, result 0 00:17:40.057 00:17:40.057 00:17:40.057 14:12:42 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.627 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:40.627 14:12:43 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:40.627 14:12:43 -- ftl/trim.sh@109 -- # fio_kill 00:17:40.627 14:12:43 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.627 14:12:43 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.627 14:12:43 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:40.627 14:12:43 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:40.627 14:12:43 -- ftl/trim.sh@20 -- # killprocess 72533 00:17:40.627 14:12:43 -- common/autotest_common.sh@936 -- # '[' -z 72533 ']' 00:17:40.627 14:12:43 -- common/autotest_common.sh@940 -- # kill -0 72533 00:17:40.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72533) - No such process 00:17:40.627 Process with pid 72533 is not found 00:17:40.627 14:12:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72533 is not found' 00:17:40.627 ************************************ 00:17:40.627 END TEST ftl_trim 00:17:40.627 ************************************ 00:17:40.627 00:17:40.627 real 1m11.986s 00:17:40.627 user 1m27.115s 00:17:40.627 sys 0m16.894s 00:17:40.627 14:12:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:40.627 14:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:40.627 14:12:43 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:40.627 14:12:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:40.627 14:12:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.627 14:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:40.627 ************************************ 00:17:40.627 START TEST ftl_restore 00:17:40.627 ************************************ 00:17:40.627 14:12:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:40.887 * Looking for test storage... 
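The ftl_trim test above ends by verifying data integrity with md5sum -c before cleaning up, and ftl_restore, which starts next, gates on the same kind of check. Reduced to its essentials the pattern is (a sketch of the shape of the check, not the exact trim.sh commands; the checksum-generation half ran earlier in the test and is not in this excerpt, the paths are the ones printed in the log):

  # earlier, after writing the test pattern through the FTL bdev:
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # at the end of the test, after the device has been shut down cleanly:
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5    # prints "data: OK" on success, as seen above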
00:17:40.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.887 14:12:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:40.887 14:12:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:40.887 14:12:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:40.887 14:12:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:40.887 14:12:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:40.887 14:12:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:40.887 14:12:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:40.887 14:12:43 -- scripts/common.sh@335 -- # IFS=.-: 00:17:40.887 14:12:43 -- scripts/common.sh@335 -- # read -ra ver1 00:17:40.887 14:12:43 -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.887 14:12:43 -- scripts/common.sh@336 -- # read -ra ver2 00:17:40.887 14:12:43 -- scripts/common.sh@337 -- # local 'op=<' 00:17:40.887 14:12:43 -- scripts/common.sh@339 -- # ver1_l=2 00:17:40.887 14:12:43 -- scripts/common.sh@340 -- # ver2_l=1 00:17:40.887 14:12:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:40.887 14:12:43 -- scripts/common.sh@343 -- # case "$op" in 00:17:40.887 14:12:43 -- scripts/common.sh@344 -- # : 1 00:17:40.887 14:12:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:40.887 14:12:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.887 14:12:43 -- scripts/common.sh@364 -- # decimal 1 00:17:40.887 14:12:43 -- scripts/common.sh@352 -- # local d=1 00:17:40.887 14:12:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.887 14:12:43 -- scripts/common.sh@354 -- # echo 1 00:17:40.887 14:12:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:40.887 14:12:43 -- scripts/common.sh@365 -- # decimal 2 00:17:40.887 14:12:43 -- scripts/common.sh@352 -- # local d=2 00:17:40.887 14:12:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.887 14:12:43 -- scripts/common.sh@354 -- # echo 2 00:17:40.887 14:12:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:40.887 14:12:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:40.887 14:12:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:40.887 14:12:43 -- scripts/common.sh@367 -- # return 0 00:17:40.887 14:12:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.887 14:12:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:40.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.887 --rc genhtml_branch_coverage=1 00:17:40.887 --rc genhtml_function_coverage=1 00:17:40.887 --rc genhtml_legend=1 00:17:40.887 --rc geninfo_all_blocks=1 00:17:40.887 --rc geninfo_unexecuted_blocks=1 00:17:40.887 00:17:40.887 ' 00:17:40.887 14:12:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:40.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.887 --rc genhtml_branch_coverage=1 00:17:40.887 --rc genhtml_function_coverage=1 00:17:40.887 --rc genhtml_legend=1 00:17:40.887 --rc geninfo_all_blocks=1 00:17:40.887 --rc geninfo_unexecuted_blocks=1 00:17:40.887 00:17:40.887 ' 00:17:40.887 14:12:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:40.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.887 --rc genhtml_branch_coverage=1 00:17:40.887 --rc genhtml_function_coverage=1 00:17:40.887 --rc genhtml_legend=1 00:17:40.887 --rc geninfo_all_blocks=1 00:17:40.887 --rc geninfo_unexecuted_blocks=1 00:17:40.887 00:17:40.887 ' 00:17:40.887 14:12:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:40.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.887 --rc genhtml_branch_coverage=1 00:17:40.887 --rc genhtml_function_coverage=1 00:17:40.887 --rc genhtml_legend=1 00:17:40.887 --rc geninfo_all_blocks=1 00:17:40.887 --rc geninfo_unexecuted_blocks=1 00:17:40.887 00:17:40.887 ' 00:17:40.887 14:12:43 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:40.887 14:12:43 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:40.887 14:12:43 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.887 14:12:43 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:40.887 14:12:43 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:40.887 14:12:43 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:40.887 14:12:43 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.887 14:12:43 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:40.887 14:12:43 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:40.887 14:12:43 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.887 14:12:43 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.887 14:12:43 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:40.887 14:12:43 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:40.887 14:12:43 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:40.887 14:12:43 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:40.887 14:12:43 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:40.887 14:12:43 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:40.887 14:12:43 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.887 14:12:43 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.887 14:12:43 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:40.887 14:12:43 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:40.887 14:12:43 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:40.887 14:12:43 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:40.887 14:12:43 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:40.887 14:12:43 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:40.887 14:12:43 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:40.887 14:12:43 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:40.887 14:12:43 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.887 14:12:43 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:40.887 14:12:43 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.887 14:12:43 -- ftl/restore.sh@13 -- # mktemp -d 00:17:40.887 14:12:43 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.D1pPKSFN0g 00:17:40.887 14:12:43 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:40.887 14:12:43 -- ftl/restore.sh@16 -- # case $opt in 00:17:40.887 14:12:43 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:40.887 14:12:43 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
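The xtrace above is restore.sh parsing its command line; only the -c branch executes in this run. A minimal reconstruction of that option handling (the -c meaning is confirmed by the nv_cache= assignment in the trace; the -u and -f branches are assumptions inferred from the getopts spec ':u:c:f'):

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;  # NV cache / write-buffer device PCI address (0000:00:06.0 here)
      u) uuid=$OPTARG ;;      # assumption: FTL UUID to restore instead of creating a new one
      f) fast=1 ;;            # assumption: boolean flag, takes no argument
    esac
  done
  shift $((OPTIND - 1))       # the xtrace shows this expanded to 'shift 2' for '-c <addr>'
  device=$1                   # base device PCI address (0000:00:07.0 here)

The shift 2 / device= / timeout=240 steps that follow in the trace correspond to the tail of this sketch.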
00:17:40.887 14:12:43 -- ftl/restore.sh@23 -- # shift 2 00:17:40.887 14:12:43 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:40.887 14:12:43 -- ftl/restore.sh@25 -- # timeout=240 00:17:40.887 14:12:43 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:40.887 14:12:43 -- ftl/restore.sh@39 -- # svcpid=72823 00:17:40.887 14:12:43 -- ftl/restore.sh@41 -- # waitforlisten 72823 00:17:40.887 14:12:43 -- common/autotest_common.sh@829 -- # '[' -z 72823 ']' 00:17:40.888 14:12:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.888 14:12:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.888 14:12:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.888 14:12:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.888 14:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:40.888 14:12:43 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:40.888 [2024-12-08 14:12:43.784095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:40.888 [2024-12-08 14:12:43.784232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72823 ] 00:17:41.148 [2024-12-08 14:12:43.938018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.407 [2024-12-08 14:12:44.161623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.407 [2024-12-08 14:12:44.161859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.798 14:12:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.798 14:12:45 -- common/autotest_common.sh@862 -- # return 0 00:17:42.798 14:12:45 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:42.798 14:12:45 -- ftl/common.sh@54 -- # local name=nvme0 00:17:42.798 14:12:45 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:42.798 14:12:45 -- ftl/common.sh@56 -- # local size=103424 00:17:42.798 14:12:45 -- ftl/common.sh@59 -- # local base_bdev 00:17:42.798 14:12:45 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:42.798 14:12:45 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:42.798 14:12:45 -- ftl/common.sh@62 -- # local base_size 00:17:42.798 14:12:45 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:42.798 14:12:45 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:17:42.798 14:12:45 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:42.798 14:12:45 -- common/autotest_common.sh@1369 -- # local bs 00:17:42.798 14:12:45 -- common/autotest_common.sh@1370 -- # local nb 00:17:42.798 14:12:45 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:43.058 14:12:45 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:43.058 { 00:17:43.058 "name": "nvme0n1", 00:17:43.058 "aliases": [ 00:17:43.058 "8d0394ed-e358-4d8a-bb80-76998f1b9e91" 00:17:43.058 ], 00:17:43.058 "product_name": "NVMe disk", 00:17:43.058 "block_size": 4096, 00:17:43.058 "num_blocks": 1310720, 00:17:43.058 "uuid": 
"8d0394ed-e358-4d8a-bb80-76998f1b9e91", 00:17:43.058 "assigned_rate_limits": { 00:17:43.058 "rw_ios_per_sec": 0, 00:17:43.058 "rw_mbytes_per_sec": 0, 00:17:43.058 "r_mbytes_per_sec": 0, 00:17:43.058 "w_mbytes_per_sec": 0 00:17:43.058 }, 00:17:43.058 "claimed": true, 00:17:43.058 "claim_type": "read_many_write_one", 00:17:43.058 "zoned": false, 00:17:43.058 "supported_io_types": { 00:17:43.058 "read": true, 00:17:43.058 "write": true, 00:17:43.058 "unmap": true, 00:17:43.058 "write_zeroes": true, 00:17:43.058 "flush": true, 00:17:43.058 "reset": true, 00:17:43.058 "compare": true, 00:17:43.058 "compare_and_write": false, 00:17:43.058 "abort": true, 00:17:43.058 "nvme_admin": true, 00:17:43.058 "nvme_io": true 00:17:43.058 }, 00:17:43.058 "driver_specific": { 00:17:43.058 "nvme": [ 00:17:43.058 { 00:17:43.058 "pci_address": "0000:00:07.0", 00:17:43.058 "trid": { 00:17:43.058 "trtype": "PCIe", 00:17:43.058 "traddr": "0000:00:07.0" 00:17:43.058 }, 00:17:43.058 "ctrlr_data": { 00:17:43.058 "cntlid": 0, 00:17:43.058 "vendor_id": "0x1b36", 00:17:43.058 "model_number": "QEMU NVMe Ctrl", 00:17:43.058 "serial_number": "12341", 00:17:43.058 "firmware_revision": "8.0.0", 00:17:43.058 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:43.058 "oacs": { 00:17:43.058 "security": 0, 00:17:43.058 "format": 1, 00:17:43.058 "firmware": 0, 00:17:43.058 "ns_manage": 1 00:17:43.058 }, 00:17:43.058 "multi_ctrlr": false, 00:17:43.058 "ana_reporting": false 00:17:43.058 }, 00:17:43.058 "vs": { 00:17:43.058 "nvme_version": "1.4" 00:17:43.058 }, 00:17:43.058 "ns_data": { 00:17:43.058 "id": 1, 00:17:43.058 "can_share": false 00:17:43.058 } 00:17:43.058 } 00:17:43.058 ], 00:17:43.058 "mp_policy": "active_passive" 00:17:43.058 } 00:17:43.058 } 00:17:43.058 ]' 00:17:43.058 14:12:45 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:43.058 14:12:45 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:43.058 14:12:45 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:43.058 14:12:45 -- common/autotest_common.sh@1373 -- # nb=1310720 00:17:43.058 14:12:45 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:17:43.058 14:12:45 -- common/autotest_common.sh@1377 -- # echo 5120 00:17:43.058 14:12:45 -- ftl/common.sh@63 -- # base_size=5120 00:17:43.058 14:12:45 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:43.058 14:12:45 -- ftl/common.sh@67 -- # clear_lvols 00:17:43.058 14:12:45 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:43.058 14:12:45 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:43.317 14:12:46 -- ftl/common.sh@28 -- # stores=25b17777-e6ea-40ac-aa51-b76afed73337 00:17:43.317 14:12:46 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:43.317 14:12:46 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25b17777-e6ea-40ac-aa51-b76afed73337 00:17:43.575 14:12:46 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:43.832 14:12:46 -- ftl/common.sh@68 -- # lvs=9b65a27c-f327-42b2-9716-a1975e407e1d 00:17:43.832 14:12:46 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9b65a27c-f327-42b2-9716-a1975e407e1d 00:17:43.832 14:12:46 -- ftl/restore.sh@43 -- # split_bdev=48435efe-3581-408c-bb7e-1940f0f514fb 00:17:43.832 14:12:46 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:17:43.832 14:12:46 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
48435efe-3581-408c-bb7e-1940f0f514fb 00:17:43.832 14:12:46 -- ftl/common.sh@35 -- # local name=nvc0 00:17:43.832 14:12:46 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:43.832 14:12:46 -- ftl/common.sh@37 -- # local base_bdev=48435efe-3581-408c-bb7e-1940f0f514fb 00:17:43.832 14:12:46 -- ftl/common.sh@38 -- # local cache_size= 00:17:43.832 14:12:46 -- ftl/common.sh@41 -- # get_bdev_size 48435efe-3581-408c-bb7e-1940f0f514fb 00:17:43.832 14:12:46 -- common/autotest_common.sh@1367 -- # local bdev_name=48435efe-3581-408c-bb7e-1940f0f514fb 00:17:43.832 14:12:46 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:43.832 14:12:46 -- common/autotest_common.sh@1369 -- # local bs 00:17:43.832 14:12:46 -- common/autotest_common.sh@1370 -- # local nb 00:17:43.832 14:12:46 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48435efe-3581-408c-bb7e-1940f0f514fb 00:17:44.090 14:12:46 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:44.090 { 00:17:44.090 "name": "48435efe-3581-408c-bb7e-1940f0f514fb", 00:17:44.090 "aliases": [ 00:17:44.090 "lvs/nvme0n1p0" 00:17:44.090 ], 00:17:44.090 "product_name": "Logical Volume", 00:17:44.090 "block_size": 4096, 00:17:44.090 "num_blocks": 26476544, 00:17:44.090 "uuid": "48435efe-3581-408c-bb7e-1940f0f514fb", 00:17:44.090 "assigned_rate_limits": { 00:17:44.090 "rw_ios_per_sec": 0, 00:17:44.090 "rw_mbytes_per_sec": 0, 00:17:44.090 "r_mbytes_per_sec": 0, 00:17:44.090 "w_mbytes_per_sec": 0 00:17:44.090 }, 00:17:44.090 "claimed": false, 00:17:44.090 "zoned": false, 00:17:44.090 "supported_io_types": { 00:17:44.090 "read": true, 00:17:44.090 "write": true, 00:17:44.090 "unmap": true, 00:17:44.090 "write_zeroes": true, 00:17:44.090 "flush": false, 00:17:44.090 "reset": true, 00:17:44.090 "compare": false, 00:17:44.090 "compare_and_write": false, 00:17:44.090 "abort": false, 00:17:44.090 "nvme_admin": false, 00:17:44.090 "nvme_io": false 00:17:44.090 }, 00:17:44.090 "driver_specific": { 00:17:44.090 "lvol": { 00:17:44.090 "lvol_store_uuid": "9b65a27c-f327-42b2-9716-a1975e407e1d", 00:17:44.090 "base_bdev": "nvme0n1", 00:17:44.090 "thin_provision": true, 00:17:44.090 "snapshot": false, 00:17:44.090 "clone": false, 00:17:44.090 "esnap_clone": false 00:17:44.090 } 00:17:44.090 } 00:17:44.090 } 00:17:44.090 ]' 00:17:44.090 14:12:46 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:44.090 14:12:46 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:44.090 14:12:46 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:44.090 14:12:46 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:44.090 14:12:46 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:44.090 14:12:46 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:44.090 14:12:46 -- ftl/common.sh@41 -- # local base_size=5171 00:17:44.090 14:12:46 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:44.090 14:12:46 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:44.351 14:12:47 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:44.351 14:12:47 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:44.351 14:12:47 -- ftl/common.sh@48 -- # get_bdev_size 48435efe-3581-408c-bb7e-1940f0f514fb 00:17:44.351 14:12:47 -- common/autotest_common.sh@1367 -- # local bdev_name=48435efe-3581-408c-bb7e-1940f0f514fb 00:17:44.351 14:12:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:44.351 14:12:47 -- common/autotest_common.sh@1369 -- # local 
bs 00:17:44.351 14:12:47 -- common/autotest_common.sh@1370 -- # local nb 00:17:44.351 14:12:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48435efe-3581-408c-bb7e-1940f0f514fb 00:17:44.611 14:12:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:44.611 { 00:17:44.611 "name": "48435efe-3581-408c-bb7e-1940f0f514fb", 00:17:44.611 "aliases": [ 00:17:44.611 "lvs/nvme0n1p0" 00:17:44.611 ], 00:17:44.611 "product_name": "Logical Volume", 00:17:44.611 "block_size": 4096, 00:17:44.611 "num_blocks": 26476544, 00:17:44.611 "uuid": "48435efe-3581-408c-bb7e-1940f0f514fb", 00:17:44.611 "assigned_rate_limits": { 00:17:44.611 "rw_ios_per_sec": 0, 00:17:44.611 "rw_mbytes_per_sec": 0, 00:17:44.611 "r_mbytes_per_sec": 0, 00:17:44.611 "w_mbytes_per_sec": 0 00:17:44.611 }, 00:17:44.611 "claimed": false, 00:17:44.611 "zoned": false, 00:17:44.611 "supported_io_types": { 00:17:44.611 "read": true, 00:17:44.612 "write": true, 00:17:44.612 "unmap": true, 00:17:44.612 "write_zeroes": true, 00:17:44.612 "flush": false, 00:17:44.612 "reset": true, 00:17:44.612 "compare": false, 00:17:44.612 "compare_and_write": false, 00:17:44.612 "abort": false, 00:17:44.612 "nvme_admin": false, 00:17:44.612 "nvme_io": false 00:17:44.612 }, 00:17:44.612 "driver_specific": { 00:17:44.612 "lvol": { 00:17:44.612 "lvol_store_uuid": "9b65a27c-f327-42b2-9716-a1975e407e1d", 00:17:44.612 "base_bdev": "nvme0n1", 00:17:44.612 "thin_provision": true, 00:17:44.612 "snapshot": false, 00:17:44.612 "clone": false, 00:17:44.612 "esnap_clone": false 00:17:44.612 } 00:17:44.612 } 00:17:44.612 } 00:17:44.612 ]' 00:17:44.612 14:12:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:44.612 14:12:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:44.612 14:12:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:44.612 14:12:47 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:44.612 14:12:47 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:44.612 14:12:47 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:44.612 14:12:47 -- ftl/common.sh@48 -- # cache_size=5171 00:17:44.612 14:12:47 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:44.873 14:12:47 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:44.873 14:12:47 -- ftl/restore.sh@48 -- # get_bdev_size 48435efe-3581-408c-bb7e-1940f0f514fb 00:17:44.873 14:12:47 -- common/autotest_common.sh@1367 -- # local bdev_name=48435efe-3581-408c-bb7e-1940f0f514fb 00:17:44.873 14:12:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:17:44.873 14:12:47 -- common/autotest_common.sh@1369 -- # local bs 00:17:44.873 14:12:47 -- common/autotest_common.sh@1370 -- # local nb 00:17:44.873 14:12:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 48435efe-3581-408c-bb7e-1940f0f514fb 00:17:45.134 14:12:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:17:45.134 { 00:17:45.134 "name": "48435efe-3581-408c-bb7e-1940f0f514fb", 00:17:45.134 "aliases": [ 00:17:45.134 "lvs/nvme0n1p0" 00:17:45.134 ], 00:17:45.134 "product_name": "Logical Volume", 00:17:45.134 "block_size": 4096, 00:17:45.134 "num_blocks": 26476544, 00:17:45.134 "uuid": "48435efe-3581-408c-bb7e-1940f0f514fb", 00:17:45.134 "assigned_rate_limits": { 00:17:45.134 "rw_ios_per_sec": 0, 00:17:45.134 "rw_mbytes_per_sec": 0, 00:17:45.135 "r_mbytes_per_sec": 0, 00:17:45.135 "w_mbytes_per_sec": 0 00:17:45.135 }, 00:17:45.135 
"claimed": false, 00:17:45.135 "zoned": false, 00:17:45.135 "supported_io_types": { 00:17:45.135 "read": true, 00:17:45.135 "write": true, 00:17:45.135 "unmap": true, 00:17:45.135 "write_zeroes": true, 00:17:45.135 "flush": false, 00:17:45.135 "reset": true, 00:17:45.135 "compare": false, 00:17:45.135 "compare_and_write": false, 00:17:45.135 "abort": false, 00:17:45.135 "nvme_admin": false, 00:17:45.135 "nvme_io": false 00:17:45.135 }, 00:17:45.135 "driver_specific": { 00:17:45.135 "lvol": { 00:17:45.135 "lvol_store_uuid": "9b65a27c-f327-42b2-9716-a1975e407e1d", 00:17:45.135 "base_bdev": "nvme0n1", 00:17:45.135 "thin_provision": true, 00:17:45.135 "snapshot": false, 00:17:45.135 "clone": false, 00:17:45.135 "esnap_clone": false 00:17:45.135 } 00:17:45.135 } 00:17:45.135 } 00:17:45.135 ]' 00:17:45.135 14:12:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:17:45.135 14:12:47 -- common/autotest_common.sh@1372 -- # bs=4096 00:17:45.135 14:12:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:17:45.135 14:12:47 -- common/autotest_common.sh@1373 -- # nb=26476544 00:17:45.135 14:12:47 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:17:45.135 14:12:47 -- common/autotest_common.sh@1377 -- # echo 103424 00:17:45.135 14:12:47 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:45.135 14:12:47 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 48435efe-3581-408c-bb7e-1940f0f514fb --l2p_dram_limit 10' 00:17:45.135 14:12:47 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:45.135 14:12:47 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:17:45.135 14:12:47 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:45.135 14:12:47 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:45.135 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:45.135 14:12:47 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 48435efe-3581-408c-bb7e-1940f0f514fb --l2p_dram_limit 10 -c nvc0n1p0 00:17:45.394 [2024-12-08 14:12:48.086323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.086389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:45.394 [2024-12-08 14:12:48.086408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:45.394 [2024-12-08 14:12:48.086419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.086488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.086499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.394 [2024-12-08 14:12:48.086510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:45.394 [2024-12-08 14:12:48.086519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.086543] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:45.394 [2024-12-08 14:12:48.087434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:45.394 [2024-12-08 14:12:48.087460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.087468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.394 [2024-12-08 14:12:48.087480] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:17:45.394 [2024-12-08 14:12:48.087488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.087530] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa53147e-1f9e-4e05-b079-7ab61488c7ac 00:17:45.394 [2024-12-08 14:12:48.089397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.089454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:45.394 [2024-12-08 14:12:48.089466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:45.394 [2024-12-08 14:12:48.089476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.097924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.097972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.394 [2024-12-08 14:12:48.097996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.347 ms 00:17:45.394 [2024-12-08 14:12:48.098005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.098100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.098111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.394 [2024-12-08 14:12:48.098118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:45.394 [2024-12-08 14:12:48.098130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.098187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.098199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:45.394 [2024-12-08 14:12:48.098206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:45.394 [2024-12-08 14:12:48.098215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.098237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.394 [2024-12-08 14:12:48.101965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.102010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.394 [2024-12-08 14:12:48.102021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:17:45.394 [2024-12-08 14:12:48.102028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.102067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.102075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:45.394 [2024-12-08 14:12:48.102084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:45.394 [2024-12-08 14:12:48.102090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.102120] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:45.394 [2024-12-08 14:12:48.102218] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:45.394 [2024-12-08 14:12:48.102232] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:45.394 [2024-12-08 14:12:48.102242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:45.394 [2024-12-08 14:12:48.102252] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102259] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102269] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:45.394 [2024-12-08 14:12:48.102284] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:45.394 [2024-12-08 14:12:48.102292] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:45.394 [2024-12-08 14:12:48.102298] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:45.394 [2024-12-08 14:12:48.102306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.102313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:45.394 [2024-12-08 14:12:48.102321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:17:45.394 [2024-12-08 14:12:48.102327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.102378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.102385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:45.394 [2024-12-08 14:12:48.102392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:45.394 [2024-12-08 14:12:48.102400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.102459] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:45.394 [2024-12-08 14:12:48.102468] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:45.394 [2024-12-08 14:12:48.102476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102491] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:45.394 [2024-12-08 14:12:48.102496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102508] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:45.394 [2024-12-08 14:12:48.102515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:45.394 [2024-12-08 14:12:48.102530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:45.394 [2024-12-08 14:12:48.102535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:45.394 [2024-12-08 14:12:48.102543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:45.394 [2024-12-08 14:12:48.102548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:45.394 [2024-12-08 14:12:48.102555] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:17:45.394 [2024-12-08 14:12:48.102560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:45.394 [2024-12-08 14:12:48.102575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:45.394 [2024-12-08 14:12:48.102583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:45.394 [2024-12-08 14:12:48.102596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:45.394 [2024-12-08 14:12:48.102602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:45.394 [2024-12-08 14:12:48.102614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:45.394 [2024-12-08 14:12:48.102633] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:45.394 [2024-12-08 14:12:48.102650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:45.394 [2024-12-08 14:12:48.102670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:45.394 [2024-12-08 14:12:48.102687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:45.394 [2024-12-08 14:12:48.102698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:45.394 [2024-12-08 14:12:48.102706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:45.394 [2024-12-08 14:12:48.102711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:45.394 [2024-12-08 14:12:48.102718] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:45.394 [2024-12-08 14:12:48.102723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:45.394 [2024-12-08 14:12:48.102731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:45.394 [2024-12-08 14:12:48.102746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:45.394 [2024-12-08 14:12:48.102751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:45.394 [2024-12-08 14:12:48.102758] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:45.394 [2024-12-08 14:12:48.102764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:45.394 [2024-12-08 14:12:48.102772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:45.394 [2024-12-08 14:12:48.102777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:45.394 [2024-12-08 14:12:48.102785] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:45.394 [2024-12-08 14:12:48.102795] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:45.394 [2024-12-08 14:12:48.102804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:45.394 [2024-12-08 14:12:48.102810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:45.394 [2024-12-08 14:12:48.102817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:45.394 [2024-12-08 14:12:48.102823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:45.394 [2024-12-08 14:12:48.102830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:45.394 [2024-12-08 14:12:48.102835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:45.394 [2024-12-08 14:12:48.102842] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:45.394 [2024-12-08 14:12:48.102848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:45.394 [2024-12-08 14:12:48.102855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:45.394 [2024-12-08 14:12:48.102860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:45.394 [2024-12-08 14:12:48.102868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:45.394 [2024-12-08 14:12:48.102873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:45.394 [2024-12-08 14:12:48.102883] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:45.394 [2024-12-08 14:12:48.102888] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:45.394 [2024-12-08 14:12:48.102897] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:45.394 [2024-12-08 14:12:48.102903] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:45.394 [2024-12-08 14:12:48.102910] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:45.394 [2024-12-08 14:12:48.102915] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:45.394 [2024-12-08 14:12:48.102922] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:45.394 [2024-12-08 14:12:48.102928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.102936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:45.394 [2024-12-08 14:12:48.102942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:17:45.394 [2024-12-08 14:12:48.102949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.118064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.118105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.394 [2024-12-08 14:12:48.118114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:17:45.394 [2024-12-08 14:12:48.118123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.118200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.118211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:45.394 [2024-12-08 14:12:48.118221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:45.394 [2024-12-08 14:12:48.118229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.145018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.145054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.394 [2024-12-08 14:12:48.145062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.747 ms 00:17:45.394 [2024-12-08 14:12:48.145071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.145098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.145108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.394 [2024-12-08 14:12:48.145116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:45.394 [2024-12-08 14:12:48.145124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.145513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.145532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.394 [2024-12-08 14:12:48.145540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:45.394 [2024-12-08 14:12:48.145547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.145642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.145653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.394 [2024-12-08 14:12:48.145659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 
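The layout dump above is internally consistent with the construct arguments: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, and because bdev_ftl_create was passed --l2p_dram_limit 10, only a roughly 10 MiB window of that table may stay resident (see the "l2p maximum resident size is: 9 (of 10) MiB" notice a few lines down). A quick sketch of the arithmetic, runnable in any POSIX shell; the numbers are copied from the dump, nothing here is computed by the test itself:

  # L2P table = entries x entry size: 20971520 * 4 B = 80 MiB
  echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80, matching "Region l2p ... blocks: 80.00 MiB"
  # the DRAM-resident slice is capped separately by --l2p_dram_limit 10 (MiB)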
00:17:45.394 [2024-12-08 14:12:48.145666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.158997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.159029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.394 [2024-12-08 14:12:48.159037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.316 ms 00:17:45.394 [2024-12-08 14:12:48.159044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.168767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:45.394 [2024-12-08 14:12:48.171334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.171362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:45.394 [2024-12-08 14:12:48.171371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.224 ms 00:17:45.394 [2024-12-08 14:12:48.171377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.235102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.394 [2024-12-08 14:12:48.235137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:45.394 [2024-12-08 14:12:48.235149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.700 ms 00:17:45.394 [2024-12-08 14:12:48.235155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.394 [2024-12-08 14:12:48.235191] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:45.394 [2024-12-08 14:12:48.235199] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:48.696 [2024-12-08 14:12:51.466421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.696 [2024-12-08 14:12:51.466516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:48.696 [2024-12-08 14:12:51.466539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3231.205 ms 00:17:48.696 [2024-12-08 14:12:51.466549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.696 [2024-12-08 14:12:51.466786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.696 [2024-12-08 14:12:51.466799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:48.696 [2024-12-08 14:12:51.466816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:17:48.696 [2024-12-08 14:12:51.466825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.696 [2024-12-08 14:12:51.494306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.696 [2024-12-08 14:12:51.494361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:48.696 [2024-12-08 14:12:51.494387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.417 ms 00:17:48.696 [2024-12-08 14:12:51.494396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.696 [2024-12-08 14:12:51.520730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.696 [2024-12-08 14:12:51.520785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:48.696 [2024-12-08 14:12:51.520804] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.269 ms
00:17:48.696 [2024-12-08 14:12:51.520811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.696 [2024-12-08 14:12:51.521213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.696 [2024-12-08 14:12:51.521237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:17:48.696 [2024-12-08 14:12:51.521256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms
00:17:48.696 [2024-12-08 14:12:51.521268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.696 [2024-12-08 14:12:51.594407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.696 [2024-12-08 14:12:51.594467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:17:48.696 [2024-12-08 14:12:51.594486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.052 ms
00:17:48.696 [2024-12-08 14:12:51.594495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.957 [2024-12-08 14:12:51.623324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.958 [2024-12-08 14:12:51.623387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:17:48.958 [2024-12-08 14:12:51.623403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.764 ms
00:17:48.958 [2024-12-08 14:12:51.623411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.958 [2024-12-08 14:12:51.624904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.958 [2024-12-08 14:12:51.624960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:17:48.958 [2024-12-08 14:12:51.624977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms
00:17:48.958 [2024-12-08 14:12:51.625009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.958 [2024-12-08 14:12:51.652834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.958 [2024-12-08 14:12:51.652890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:48.958 [2024-12-08 14:12:51.652906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.753 ms
00:17:48.958 [2024-12-08 14:12:51.652914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.958 [2024-12-08 14:12:51.653002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.958 [2024-12-08 14:12:51.653014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:48.958 [2024-12-08 14:12:51.653025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:17:48.958 [2024-12-08 14:12:51.653034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.958 [2024-12-08 14:12:51.653151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:48.958 [2024-12-08 14:12:51.653162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:48.958 [2024-12-08 14:12:51.653187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:17:48.958 [2024-12-08 14:12:51.653199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:48.958 [2024-12-08 14:12:51.654387] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3567.539 ms, result 0
00:17:48.958 {
00:17:48.958 "name": "ftl0",
00:17:48.958 "uuid": "aa53147e-1f9e-4e05-b079-7ab61488c7ac"
00:17:48.958 }
00:17:48.958 14:12:51 -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:17:48.958 14:12:51 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:17:49.217 14:12:51 -- ftl/restore.sh@63 -- # echo ']}'
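restore.sh@61-@63 above wrap the dumped bdev subsystem configuration in a {"subsystems": [...]} envelope. The redirection target is not captured by the trace, but judging from the --json argument handed to spdk_dd later in this run it presumably lands in test/ftl/config/ftl.json. A minimal sketch of the same pattern, with the output path being that assumption:

  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed path, matches --json= below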
"ftl0", 00:17:48.958 "uuid": "aa53147e-1f9e-4e05-b079-7ab61488c7ac" 00:17:48.958 } 00:17:48.958 14:12:51 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:48.958 14:12:51 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:49.217 14:12:51 -- ftl/restore.sh@63 -- # echo ']}' 00:17:49.217 14:12:51 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:49.217 [2024-12-08 14:12:52.065757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.217 [2024-12-08 14:12:52.065828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.217 [2024-12-08 14:12:52.065842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:49.217 [2024-12-08 14:12:52.065853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.217 [2024-12-08 14:12:52.065880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.217 [2024-12-08 14:12:52.068809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.068852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.218 [2024-12-08 14:12:52.068867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:17:49.218 [2024-12-08 14:12:52.068883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.218 [2024-12-08 14:12:52.069212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.069232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.218 [2024-12-08 14:12:52.069250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:49.218 [2024-12-08 14:12:52.069261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.218 [2024-12-08 14:12:52.072539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.072563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.218 [2024-12-08 14:12:52.072576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.251 ms 00:17:49.218 [2024-12-08 14:12:52.072584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.218 [2024-12-08 14:12:52.078856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.078903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:49.218 [2024-12-08 14:12:52.078917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:17:49.218 [2024-12-08 14:12:52.078925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.218 [2024-12-08 14:12:52.107067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.107118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.218 [2024-12-08 14:12:52.107134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.009 ms 00:17:49.218 [2024-12-08 14:12:52.107142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.218 [2024-12-08 14:12:52.126140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.126353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.218 
[2024-12-08 14:12:52.126386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.934 ms 00:17:49.218 [2024-12-08 14:12:52.126396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.218 [2024-12-08 14:12:52.126642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.218 [2024-12-08 14:12:52.126656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.218 [2024-12-08 14:12:52.126668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:49.218 [2024-12-08 14:12:52.126678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.480 [2024-12-08 14:12:52.153945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.480 [2024-12-08 14:12:52.154160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:49.480 [2024-12-08 14:12:52.154189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.236 ms 00:17:49.480 [2024-12-08 14:12:52.154197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.480 [2024-12-08 14:12:52.180788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.480 [2024-12-08 14:12:52.180839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:49.480 [2024-12-08 14:12:52.180854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.537 ms 00:17:49.480 [2024-12-08 14:12:52.180861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.480 [2024-12-08 14:12:52.207234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.480 [2024-12-08 14:12:52.207289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.480 [2024-12-08 14:12:52.207304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.311 ms 00:17:49.480 [2024-12-08 14:12:52.207311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.480 [2024-12-08 14:12:52.233341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.480 [2024-12-08 14:12:52.233391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.480 [2024-12-08 14:12:52.233405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.908 ms 00:17:49.480 [2024-12-08 14:12:52.233412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.480 [2024-12-08 14:12:52.233469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.480 [2024-12-08 14:12:52.233489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 
/ 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.480 [2024-12-08 14:12:52.233707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.233973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234015] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 
14:12:52.234253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.481 [2024-12-08 14:12:52.234446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.481 [2024-12-08 14:12:52.234456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa53147e-1f9e-4e05-b079-7ab61488c7ac 00:17:49.481 [2024-12-08 14:12:52.234464] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.481 [2024-12-08 14:12:52.234474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.481 [2024-12-08 14:12:52.234482] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.481 [2024-12-08 14:12:52.234492] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.481 [2024-12-08 14:12:52.234499] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.481 [2024-12-08 
14:12:52.234509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.481 [2024-12-08 14:12:52.234517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.481 [2024-12-08 14:12:52.234525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.481 [2024-12-08 14:12:52.234532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.481 [2024-12-08 14:12:52.234544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.481 [2024-12-08 14:12:52.234551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.481 [2024-12-08 14:12:52.234563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:17:49.481 [2024-12-08 14:12:52.234570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.481 [2024-12-08 14:12:52.248762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.481 [2024-12-08 14:12:52.248958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.481 [2024-12-08 14:12:52.249004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.129 ms 00:17:49.482 [2024-12-08 14:12:52.249014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.482 [2024-12-08 14:12:52.249304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.482 [2024-12-08 14:12:52.249325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.482 [2024-12-08 14:12:52.249342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:49.482 [2024-12-08 14:12:52.249354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.482 [2024-12-08 14:12:52.299940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.482 [2024-12-08 14:12:52.300162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.482 [2024-12-08 14:12:52.300192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.482 [2024-12-08 14:12:52.300201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.482 [2024-12-08 14:12:52.300286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.482 [2024-12-08 14:12:52.300297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.482 [2024-12-08 14:12:52.300314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.482 [2024-12-08 14:12:52.300322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.482 [2024-12-08 14:12:52.300412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.482 [2024-12-08 14:12:52.300422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.482 [2024-12-08 14:12:52.300432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.482 [2024-12-08 14:12:52.300440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.482 [2024-12-08 14:12:52.300460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.482 [2024-12-08 14:12:52.300469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.482 [2024-12-08 14:12:52.300482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.482 [2024-12-08 14:12:52.300490] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:17:49.482 [2024-12-08 14:12:52.387606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.482 [2024-12-08 14:12:52.387811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:49.482 [2024-12-08 14:12:52.387839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.482 [2024-12-08 14:12:52.387848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.421462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.421518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:49.743 [2024-12-08 14:12:52.421531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.421540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.421629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.421639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:49.743 [2024-12-08 14:12:52.421649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.421657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.421708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.421718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:49.743 [2024-12-08 14:12:52.421728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.421738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.421845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.421856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:49.743 [2024-12-08 14:12:52.421866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.421874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.421912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.421922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:49.743 [2024-12-08 14:12:52.421934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.421941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.422123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.422139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:49.743 [2024-12-08 14:12:52.422150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.422158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.422219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:49.743 [2024-12-08 14:12:52.422229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:49.743 [2024-12-08 14:12:52.422239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:49.743 [2024-12-08 14:12:52.422249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.743 [2024-12-08 14:12:52.422404] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.599 ms, result 0
00:17:49.743 true
00:17:49.743 14:12:52 -- ftl/restore.sh@66 -- # killprocess 72823
00:17:49.743 14:12:52 -- common/autotest_common.sh@936 -- # '[' -z 72823 ']'
00:17:49.743 14:12:52 -- common/autotest_common.sh@940 -- # kill -0 72823
00:17:49.743 14:12:52 -- common/autotest_common.sh@941 -- # uname
00:17:49.743 14:12:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:49.743 14:12:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72823
00:17:49.743 killing process with pid 72823
00:17:49.743 14:12:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:49.743 14:12:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:49.743 14:12:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72823'
00:17:49.743 14:12:52 -- common/autotest_common.sh@955 -- # kill 72823
00:17:49.743 14:12:52 -- common/autotest_common.sh@960 -- # wait 72823
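killprocess, traced step by step above, is the stock teardown helper from common/autotest_common.sh: refuse an empty pid, probe liveness with kill -0, resolve the command name through ps (reactor_0 here, the SPDK app), then send the default SIGTERM and wait so the exit status is actually reaped. A condensed sketch of that sequence; the real helper carries extra branches (sudo-owned processes, non-Linux hosts) that are elided here:

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                            # @936: nothing to kill
      kill -0 "$pid" || return                             # @940: bail if already gone
      if [ "$(uname)" = Linux ]; then                      # @941
          process_name=$(ps --no-headers -o comm= "$pid")  # @942: reactor_0 in this run
      fi
      echo "killing process with pid $pid"                 # @954
      kill "$pid"                                          # @955: default SIGTERM
      wait "$pid"                                          # @960: reap, propagate exit code
  }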
00:17:56.339 14:12:58 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:17:59.628 262144+0 records in
00:17:59.628 262144+0 records out
00:17:59.628 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.85791 s, 278 MB/s
00:17:59.628 14:13:02 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:01.005 14:13:03 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:01.005 [2024-12-08 14:13:03.807622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
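The dd figures above check out: 256K records of 4 KiB is 262144 * 4096 = 1073741824 bytes, and 1 GiB copied in 3.85791 s is about 278 MB/s, exactly what dd prints. With the target app already killed, spdk_dd boots its own SPDK instance from the JSON saved earlier (hence the fresh DPDK/EAL initialization below) and copies the testfile into the ftl0 bdev; the md5sum taken at restore.sh@70 is presumably the reference the restore check compares against once the device comes back up. The throughput cross-check as a shell one-liner:

  # 262144 * 4096 B / 3.85791 s, in decimal MB/s
  awk 'BEGIN { print 262144 * 4096 / 3.85791 / 1e6 }'   # ~278.3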
00:18:01.005 [2024-12-08 14:13:03.807836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:18:01.268 [2024-12-08 14:13:03.950041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.268 [2024-12-08 14:13:04.163430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.842 [2024-12-08 14:13:04.453931] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.842 [2024-12-08 14:13:04.454045] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.842 [2024-12-08 14:13:04.610081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.610143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:01.842 [2024-12-08 14:13:04.610157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:01.842 [2024-12-08 14:13:04.610169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.610223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.610233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.842 [2024-12-08 14:13:04.610242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:01.842 [2024-12-08 14:13:04.610250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.610270] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:01.842 [2024-12-08 14:13:04.611058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:01.842 [2024-12-08 14:13:04.611077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.611085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.842 [2024-12-08 14:13:04.611095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:18:01.842 [2024-12-08 14:13:04.611103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.612836] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:01.842 [2024-12-08 14:13:04.627846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.627902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:01.842 [2024-12-08 14:13:04.627917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.012 ms 00:18:01.842 [2024-12-08 14:13:04.627925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.628033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.628044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:01.842 [2024-12-08 14:13:04.628053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:01.842 [2024-12-08 14:13:04.628061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.636709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 
14:13:04.636760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.842 [2024-12-08 14:13:04.636770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.540 ms 00:18:01.842 [2024-12-08 14:13:04.636778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.636879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.636889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.842 [2024-12-08 14:13:04.636899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:01.842 [2024-12-08 14:13:04.636908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.636956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.636965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:01.842 [2024-12-08 14:13:04.636973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:01.842 [2024-12-08 14:13:04.637015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.637049] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.842 [2024-12-08 14:13:04.641398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.641442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.842 [2024-12-08 14:13:04.641453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.363 ms 00:18:01.842 [2024-12-08 14:13:04.641461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.641501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.641509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:01.842 [2024-12-08 14:13:04.641518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:01.842 [2024-12-08 14:13:04.641529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.641583] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:01.842 [2024-12-08 14:13:04.641606] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:01.842 [2024-12-08 14:13:04.641642] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:01.842 [2024-12-08 14:13:04.641658] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:01.842 [2024-12-08 14:13:04.641734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:01.842 [2024-12-08 14:13:04.641744] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:01.842 [2024-12-08 14:13:04.641758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:01.842 [2024-12-08 14:13:04.641768] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:01.842 [2024-12-08 14:13:04.641777] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:01.842 [2024-12-08 14:13:04.641786] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:01.842 [2024-12-08 14:13:04.641794] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:01.842 [2024-12-08 14:13:04.641802] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:01.842 [2024-12-08 14:13:04.641811] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:01.842 [2024-12-08 14:13:04.641820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.641828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:01.842 [2024-12-08 14:13:04.641836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:01.842 [2024-12-08 14:13:04.641843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.641905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.842 [2024-12-08 14:13:04.641914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:01.842 [2024-12-08 14:13:04.641923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:01.842 [2024-12-08 14:13:04.641929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.842 [2024-12-08 14:13:04.642030] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:01.842 [2024-12-08 14:13:04.642042] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:01.842 [2024-12-08 14:13:04.642051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.842 [2024-12-08 14:13:04.642059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.842 [2024-12-08 14:13:04.642068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:01.842 [2024-12-08 14:13:04.642075] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:01.842 [2024-12-08 14:13:04.642083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:01.842 [2024-12-08 14:13:04.642091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:01.842 [2024-12-08 14:13:04.642099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:01.842 [2024-12-08 14:13:04.642107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.842 [2024-12-08 14:13:04.642116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:01.842 [2024-12-08 14:13:04.642123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:01.842 [2024-12-08 14:13:04.642130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.843 [2024-12-08 14:13:04.642138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:01.843 [2024-12-08 14:13:04.642144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:01.843 [2024-12-08 14:13:04.642151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:01.843 [2024-12-08 14:13:04.642173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:01.843 [2024-12-08 14:13:04.642180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:01.843 [2024-12-08 14:13:04.642193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:01.843 [2024-12-08 14:13:04.642200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:01.843 [2024-12-08 14:13:04.642213] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:01.843 [2024-12-08 14:13:04.642233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:01.843 [2024-12-08 14:13:04.642252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:01.843 [2024-12-08 14:13:04.642272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:01.843 [2024-12-08 14:13:04.642291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.843 [2024-12-08 14:13:04.642305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:01.843 [2024-12-08 14:13:04.642312] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:01.843 [2024-12-08 14:13:04.642318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.843 [2024-12-08 14:13:04.642332] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:01.843 [2024-12-08 14:13:04.642342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:01.843 [2024-12-08 14:13:04.642352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.843 [2024-12-08 14:13:04.642368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:01.843 [2024-12-08 14:13:04.642374] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:01.843 [2024-12-08 14:13:04.642382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:01.843 [2024-12-08 14:13:04.642389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:01.843 [2024-12-08 14:13:04.642396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:01.843 [2024-12-08 14:13:04.642403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:01.843 [2024-12-08 14:13:04.642411] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:01.843 [2024-12-08 14:13:04.642422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.843 [2024-12-08 14:13:04.642431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:01.843 [2024-12-08 14:13:04.642440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:01.843 [2024-12-08 14:13:04.642447] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:01.843 [2024-12-08 14:13:04.642455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:01.843 [2024-12-08 14:13:04.642463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:01.843 [2024-12-08 14:13:04.642470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:01.843 [2024-12-08 14:13:04.642477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:01.843 [2024-12-08 14:13:04.642484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:01.843 [2024-12-08 14:13:04.642491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:01.843 [2024-12-08 14:13:04.642498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:01.843 [2024-12-08 14:13:04.642505] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:01.843 [2024-12-08 14:13:04.642512] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:01.843 [2024-12-08 14:13:04.642520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:01.843 [2024-12-08 14:13:04.642528] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:01.843 [2024-12-08 14:13:04.642535] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.843 [2024-12-08 14:13:04.642543] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:01.843 [2024-12-08 14:13:04.642551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:01.843 [2024-12-08 14:13:04.642558] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:01.843 [2024-12-08 14:13:04.642565] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:01.843 [2024-12-08 14:13:04.642572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.642579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:01.843 [2024-12-08 14:13:04.642586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:18:01.843 [2024-12-08 14:13:04.642596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.661281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.661330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.843 [2024-12-08 14:13:04.661341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.643 ms 00:18:01.843 [2024-12-08 14:13:04.661356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.661450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.661459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:01.843 [2024-12-08 14:13:04.661467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:01.843 [2024-12-08 14:13:04.661475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.707341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.707564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.843 [2024-12-08 14:13:04.707587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.810 ms 00:18:01.843 [2024-12-08 14:13:04.707596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.707651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.707661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.843 [2024-12-08 14:13:04.707670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:01.843 [2024-12-08 14:13:04.707678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.708278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.708313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.843 [2024-12-08 14:13:04.708325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:01.843 [2024-12-08 14:13:04.708341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.708473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.708482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.843 [2024-12-08 14:13:04.708491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:01.843 [2024-12-08 14:13:04.708500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.725419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.725466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.843 [2024-12-08 14:13:04.725478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.893 ms 00:18:01.843 [2024-12-08 
14:13:04.725487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.843 [2024-12-08 14:13:04.740418] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:01.843 [2024-12-08 14:13:04.740476] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:01.843 [2024-12-08 14:13:04.740490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.843 [2024-12-08 14:13:04.740499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:01.843 [2024-12-08 14:13:04.740509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.889 ms 00:18:01.843 [2024-12-08 14:13:04.740516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.767413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.767464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:02.106 [2024-12-08 14:13:04.767477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.837 ms 00:18:02.106 [2024-12-08 14:13:04.767485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.781130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.781208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:02.106 [2024-12-08 14:13:04.781222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.587 ms 00:18:02.106 [2024-12-08 14:13:04.781230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.794525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.794724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:02.106 [2024-12-08 14:13:04.794759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.245 ms 00:18:02.106 [2024-12-08 14:13:04.794767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.795187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.795202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:02.106 [2024-12-08 14:13:04.795212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:02.106 [2024-12-08 14:13:04.795220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.864015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.864079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:02.106 [2024-12-08 14:13:04.864094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.777 ms 00:18:02.106 [2024-12-08 14:13:04.864103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.875936] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:02.106 [2024-12-08 14:13:04.879347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.879548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.106 [2024-12-08 14:13:04.879569] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.180 ms 00:18:02.106 [2024-12-08 14:13:04.879578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.879667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.879677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:02.106 [2024-12-08 14:13:04.879687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:02.106 [2024-12-08 14:13:04.879695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.879762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.879772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.106 [2024-12-08 14:13:04.879781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:02.106 [2024-12-08 14:13:04.879789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.881196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.881244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:02.106 [2024-12-08 14:13:04.881255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:18:02.106 [2024-12-08 14:13:04.881263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.881302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.881310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.106 [2024-12-08 14:13:04.881319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:02.106 [2024-12-08 14:13:04.881333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.881371] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:02.106 [2024-12-08 14:13:04.881381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.881390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:02.106 [2024-12-08 14:13:04.881400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:02.106 [2024-12-08 14:13:04.881408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.908309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.908379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.106 [2024-12-08 14:13:04.908393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.880 ms 00:18:02.106 [2024-12-08 14:13:04.908401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.908490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.106 [2024-12-08 14:13:04.908508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.106 [2024-12-08 14:13:04.908519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:02.106 [2024-12-08 14:13:04.908527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.106 [2024-12-08 14:13:04.909834] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 299.264 ms, result 0 00:18:03.054  [2024-12-08T14:13:07.359Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-08T14:13:07.930Z] Copying: 25/1024 [MB] (14 MBps) [2024-12-08T14:13:09.313Z] Copying: 47/1024 [MB] (21 MBps) [2024-12-08T14:13:10.250Z] Copying: 84/1024 [MB] (37 MBps) [2024-12-08T14:13:11.200Z] Copying: 114/1024 [MB] (29 MBps) [2024-12-08T14:13:12.142Z] Copying: 139/1024 [MB] (25 MBps) [2024-12-08T14:13:13.085Z] Copying: 162/1024 [MB] (22 MBps) [2024-12-08T14:13:14.024Z] Copying: 183/1024 [MB] (21 MBps) [2024-12-08T14:13:14.956Z] Copying: 199/1024 [MB] (16 MBps) [2024-12-08T14:13:16.372Z] Copying: 236/1024 [MB] (36 MBps) [2024-12-08T14:13:16.941Z] Copying: 264/1024 [MB] (27 MBps) [2024-12-08T14:13:18.328Z] Copying: 293/1024 [MB] (29 MBps) [2024-12-08T14:13:19.269Z] Copying: 315/1024 [MB] (21 MBps) [2024-12-08T14:13:20.210Z] Copying: 338/1024 [MB] (22 MBps) [2024-12-08T14:13:21.167Z] Copying: 362/1024 [MB] (24 MBps) [2024-12-08T14:13:22.103Z] Copying: 384/1024 [MB] (22 MBps) [2024-12-08T14:13:23.043Z] Copying: 409/1024 [MB] (24 MBps) [2024-12-08T14:13:23.986Z] Copying: 441/1024 [MB] (31 MBps) [2024-12-08T14:13:24.928Z] Copying: 461/1024 [MB] (20 MBps) [2024-12-08T14:13:26.314Z] Copying: 474/1024 [MB] (12 MBps) [2024-12-08T14:13:27.260Z] Copying: 491/1024 [MB] (16 MBps) [2024-12-08T14:13:28.204Z] Copying: 513/1024 [MB] (21 MBps) [2024-12-08T14:13:29.145Z] Copying: 530/1024 [MB] (17 MBps) [2024-12-08T14:13:30.085Z] Copying: 549/1024 [MB] (18 MBps) [2024-12-08T14:13:31.027Z] Copying: 560/1024 [MB] (11 MBps) [2024-12-08T14:13:31.974Z] Copying: 577/1024 [MB] (16 MBps) [2024-12-08T14:13:33.362Z] Copying: 595/1024 [MB] (18 MBps) [2024-12-08T14:13:33.936Z] Copying: 611/1024 [MB] (16 MBps) [2024-12-08T14:13:35.319Z] Copying: 624/1024 [MB] (12 MBps) [2024-12-08T14:13:36.251Z] Copying: 635/1024 [MB] (10 MBps) [2024-12-08T14:13:37.209Z] Copying: 661/1024 [MB] (25 MBps) [2024-12-08T14:13:38.145Z] Copying: 678/1024 [MB] (17 MBps) [2024-12-08T14:13:39.090Z] Copying: 694/1024 [MB] (15 MBps) [2024-12-08T14:13:40.028Z] Copying: 707/1024 [MB] (12 MBps) [2024-12-08T14:13:40.964Z] Copying: 722/1024 [MB] (15 MBps) [2024-12-08T14:13:42.337Z] Copying: 743/1024 [MB] (20 MBps) [2024-12-08T14:13:43.268Z] Copying: 763/1024 [MB] (19 MBps) [2024-12-08T14:13:44.202Z] Copying: 783/1024 [MB] (19 MBps) [2024-12-08T14:13:45.140Z] Copying: 804/1024 [MB] (20 MBps) [2024-12-08T14:13:46.126Z] Copying: 828/1024 [MB] (24 MBps) [2024-12-08T14:13:47.059Z] Copying: 848/1024 [MB] (19 MBps) [2024-12-08T14:13:47.991Z] Copying: 868/1024 [MB] (20 MBps) [2024-12-08T14:13:49.363Z] Copying: 893/1024 [MB] (24 MBps) [2024-12-08T14:13:49.934Z] Copying: 916/1024 [MB] (23 MBps) [2024-12-08T14:13:51.316Z] Copying: 931/1024 [MB] (15 MBps) [2024-12-08T14:13:52.256Z] Copying: 953/1024 [MB] (22 MBps) [2024-12-08T14:13:53.191Z] Copying: 966/1024 [MB] (12 MBps) [2024-12-08T14:13:54.125Z] Copying: 985/1024 [MB] (19 MBps) [2024-12-08T14:13:55.059Z] Copying: 1004/1024 [MB] (18 MBps) [2024-12-08T14:13:55.059Z] Copying: 1023/1024 [MB] (18 MBps) [2024-12-08T14:13:55.059Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-12-08 14:13:54.959194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.139 [2024-12-08 14:13:54.959232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:52.139 [2024-12-08 14:13:54.959242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:52.139 [2024-12-08 14:13:54.959249] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.139 [2024-12-08 14:13:54.959265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:52.139 [2024-12-08 14:13:54.961393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.139 [2024-12-08 14:13:54.961419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:52.139 [2024-12-08 14:13:54.961431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:18:52.139 [2024-12-08 14:13:54.961438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.139 [2024-12-08 14:13:54.962940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.139 [2024-12-08 14:13:54.963056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:52.139 [2024-12-08 14:13:54.963069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.485 ms 00:18:52.139 [2024-12-08 14:13:54.963076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.139 [2024-12-08 14:13:54.980858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.139 [2024-12-08 14:13:54.980886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:52.139 [2024-12-08 14:13:54.980895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.768 ms 00:18:52.139 [2024-12-08 14:13:54.980905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.139 [2024-12-08 14:13:54.985505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.139 [2024-12-08 14:13:54.985529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:52.139 [2024-12-08 14:13:54.985537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.576 ms 00:18:52.139 [2024-12-08 14:13:54.985544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.140 [2024-12-08 14:13:55.004619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.140 [2024-12-08 14:13:55.004648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:52.140 [2024-12-08 14:13:55.004657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.038 ms 00:18:52.140 [2024-12-08 14:13:55.004663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.140 [2024-12-08 14:13:55.016911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.140 [2024-12-08 14:13:55.016939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:52.140 [2024-12-08 14:13:55.016948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.221 ms 00:18:52.140 [2024-12-08 14:13:55.016954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.140 [2024-12-08 14:13:55.017074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.140 [2024-12-08 14:13:55.017082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:52.140 [2024-12-08 14:13:55.017089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:52.140 [2024-12-08 14:13:55.017095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.140 [2024-12-08 14:13:55.036250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.140 [2024-12-08 14:13:55.036276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: persist band info metadata 00:18:52.140 [2024-12-08 14:13:55.036284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.144 ms 00:18:52.140 [2024-12-08 14:13:55.036290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.140 [2024-12-08 14:13:55.054142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.140 [2024-12-08 14:13:55.054250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:52.140 [2024-12-08 14:13:55.054263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.825 ms 00:18:52.140 [2024-12-08 14:13:55.054275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.401 [2024-12-08 14:13:55.071805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.401 [2024-12-08 14:13:55.071832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:52.401 [2024-12-08 14:13:55.071840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:18:52.401 [2024-12-08 14:13:55.071845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.401 [2024-12-08 14:13:55.089690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.401 [2024-12-08 14:13:55.089793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:52.401 [2024-12-08 14:13:55.089806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.794 ms 00:18:52.401 [2024-12-08 14:13:55.089811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.401 [2024-12-08 14:13:55.089832] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:52.401 [2024-12-08 14:13:55.089844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 
261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.089999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:52.401 [2024-12-08 14:13:55.090056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090213] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 
14:13:55.090349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:52.402 [2024-12-08 14:13:55.090430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:52.402 [2024-12-08 14:13:55.090436] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa53147e-1f9e-4e05-b079-7ab61488c7ac 00:18:52.402 [2024-12-08 14:13:55.090441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:52.402 [2024-12-08 14:13:55.090447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:52.402 [2024-12-08 14:13:55.090452] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:52.402 [2024-12-08 14:13:55.090458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:52.402 [2024-12-08 14:13:55.090463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:52.402 [2024-12-08 14:13:55.090468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:52.402 [2024-12-08 14:13:55.090473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:52.402 [2024-12-08 14:13:55.090478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:52.402 [2024-12-08 14:13:55.090487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:52.402 [2024-12-08 14:13:55.090493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.402 [2024-12-08 14:13:55.090498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:52.402 [2024-12-08 14:13:55.090504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:18:52.402 [2024-12-08 14:13:55.090511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
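(Editorial sketch, not part of the captured output.) Nearly everything in this log is built from the same four-record trace_step pattern: an Action/Rollback marker, a step name, a duration, and a status. If you want per-step timings out of a capture like this, a small awk filter over the one-record-per-line console output is enough; this is an illustrative reader, the filename is hypothetical, and it is not part of the test suite:

  awk '/407:trace_step/ { sub(/.*name: /, "");     step = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); print step ": " $0 }' ftl0.log

On the shutdown sequence above this would print lines such as "Persist superblock: 17.507 ms", one per completed step.
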
00:18:52.402 [2024-12-08 14:13:55.100293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.402 [2024-12-08 14:13:55.100387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:52.402 [2024-12-08 14:13:55.100398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.761 ms 00:18:52.402 [2024-12-08 14:13:55.100403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.402 [2024-12-08 14:13:55.100548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.402 [2024-12-08 14:13:55.100555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:52.402 [2024-12-08 14:13:55.100565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:52.402 [2024-12-08 14:13:55.100570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.402 [2024-12-08 14:13:55.128423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.402 [2024-12-08 14:13:55.128452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.402 [2024-12-08 14:13:55.128460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.402 [2024-12-08 14:13:55.128466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.402 [2024-12-08 14:13:55.128508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.402 [2024-12-08 14:13:55.128514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.403 [2024-12-08 14:13:55.128523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.128528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.128575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.128582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.403 [2024-12-08 14:13:55.128588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.128594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.128605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.128611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.403 [2024-12-08 14:13:55.128616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.128624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.185770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.185801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.403 [2024-12-08 14:13:55.185810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.185821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.403 [2024-12-08 14:13:55.208504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 
14:13:55.208514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.403 [2024-12-08 14:13:55.208567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.208572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.403 [2024-12-08 14:13:55.208614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.208619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.403 [2024-12-08 14:13:55.208700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.208706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:52.403 [2024-12-08 14:13:55.208739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.208745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:52.403 [2024-12-08 14:13:55.208784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.208790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.403 [2024-12-08 14:13:55.208828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.403 [2024-12-08 14:13:55.208834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.403 [2024-12-08 14:13:55.208840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.403 [2024-12-08 14:13:55.208927] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 249.713 ms, result 0 00:18:53.339 00:18:53.339 00:18:53.339 14:13:56 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:53.598 [2024-12-08 14:13:56.259711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
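(Editorial sketch, not part of the captured output.) With the clean shutdown above finished ('FTL shutdown', result 0), restore.sh@74 starts a second spdk_dd run that reads the data back out of ftl0 after the restart. The read-back and verify phase, sketched under the same assumptions as before — placeholder variables, flags taken verbatim from the invocation in this log, and an md5 comparison that is one plausible way to close the loop rather than the script's own check:

  "$spdk_dd" --ib=ftl0 --of="$testfile" --json="$ftl_json" --count=262144   # read 262144 blocks back out
  md5sum -c "$testfile.md5"                                                 # verify against the pre-restart checksum
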
00:18:53.598 [2024-12-08 14:13:56.259832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73605 ] 00:18:53.598 [2024-12-08 14:13:56.407858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.857 [2024-12-08 14:13:56.542527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.857 [2024-12-08 14:13:56.748157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.857 [2024-12-08 14:13:56.748207] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.119 [2024-12-08 14:13:56.897868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.897912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.119 [2024-12-08 14:13:56.897925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:54.119 [2024-12-08 14:13:56.897935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.897999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.898011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.119 [2024-12-08 14:13:56.898019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:54.119 [2024-12-08 14:13:56.898026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.898042] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.119 [2024-12-08 14:13:56.898751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.119 [2024-12-08 14:13:56.898778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.898786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.119 [2024-12-08 14:13:56.898794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:18:54.119 [2024-12-08 14:13:56.898801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.899820] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:54.119 [2024-12-08 14:13:56.912438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.912480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:54.119 [2024-12-08 14:13:56.912490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.619 ms 00:18:54.119 [2024-12-08 14:13:56.912498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.912546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.912555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:54.119 [2024-12-08 14:13:56.912563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:54.119 [2024-12-08 14:13:56.912569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.917376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 
14:13:56.917404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.119 [2024-12-08 14:13:56.917413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:18:54.119 [2024-12-08 14:13:56.917420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.917494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.917503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.119 [2024-12-08 14:13:56.917510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:54.119 [2024-12-08 14:13:56.917518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.917559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.917568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.119 [2024-12-08 14:13:56.917575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:54.119 [2024-12-08 14:13:56.917582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.917608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.119 [2024-12-08 14:13:56.921077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.921103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.119 [2024-12-08 14:13:56.921112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:18:54.119 [2024-12-08 14:13:56.921119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.921165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.921174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.119 [2024-12-08 14:13:56.921181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:54.119 [2024-12-08 14:13:56.921191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.921210] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:54.119 [2024-12-08 14:13:56.921227] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:54.119 [2024-12-08 14:13:56.921259] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:54.119 [2024-12-08 14:13:56.921273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:54.119 [2024-12-08 14:13:56.921345] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:54.119 [2024-12-08 14:13:56.921355] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.119 [2024-12-08 14:13:56.921367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:54.119 [2024-12-08 14:13:56.921376] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921385] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921392] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:54.119 [2024-12-08 14:13:56.921400] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.119 [2024-12-08 14:13:56.921407] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:54.119 [2024-12-08 14:13:56.921414] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:54.119 [2024-12-08 14:13:56.921421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.921428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.119 [2024-12-08 14:13:56.921435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:18:54.119 [2024-12-08 14:13:56.921441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.921501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.119 [2024-12-08 14:13:56.921509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.119 [2024-12-08 14:13:56.921516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:54.119 [2024-12-08 14:13:56.921522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.119 [2024-12-08 14:13:56.921599] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.119 [2024-12-08 14:13:56.921609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.119 [2024-12-08 14:13:56.921617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.119 [2024-12-08 14:13:56.921638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.119 [2024-12-08 14:13:56.921658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.119 [2024-12-08 14:13:56.921670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.119 [2024-12-08 14:13:56.921676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:54.119 [2024-12-08 14:13:56.921683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.119 [2024-12-08 14:13:56.921691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.119 [2024-12-08 14:13:56.921698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:54.119 [2024-12-08 14:13:56.921704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.119 [2024-12-08 14:13:56.921722] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:54.119 [2024-12-08 14:13:56.921728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:54.119 [2024-12-08 14:13:56.921741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:54.119 [2024-12-08 14:13:56.921747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.119 [2024-12-08 14:13:56.921760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.119 [2024-12-08 14:13:56.921778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.119 [2024-12-08 14:13:56.921797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:54.119 [2024-12-08 14:13:56.921803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:54.119 [2024-12-08 14:13:56.921809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.119 [2024-12-08 14:13:56.921815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:54.120 [2024-12-08 14:13:56.921821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:54.120 [2024-12-08 14:13:56.921828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.120 [2024-12-08 14:13:56.921834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:54.120 [2024-12-08 14:13:56.921840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.120 [2024-12-08 14:13:56.921846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.120 [2024-12-08 14:13:56.921852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:54.120 [2024-12-08 14:13:56.921858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.120 [2024-12-08 14:13:56.921864] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.120 [2024-12-08 14:13:56.921873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.120 [2024-12-08 14:13:56.921880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.120 [2024-12-08 14:13:56.921887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.120 [2024-12-08 14:13:56.921894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.120 [2024-12-08 14:13:56.921902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.120 [2024-12-08 14:13:56.921908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.120 [2024-12-08 14:13:56.921915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.120 [2024-12-08 14:13:56.921921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.120 [2024-12-08 14:13:56.921927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.120 [2024-12-08 14:13:56.921934] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.120 [2024-12-08 14:13:56.921943] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.120 [2024-12-08 14:13:56.921951] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:54.120 [2024-12-08 14:13:56.921959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:54.120 [2024-12-08 14:13:56.921966] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:54.120 [2024-12-08 14:13:56.921972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:54.120 [2024-12-08 14:13:56.921997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:54.120 [2024-12-08 14:13:56.922005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:54.120 [2024-12-08 14:13:56.922012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:54.120 [2024-12-08 14:13:56.922019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:54.120 [2024-12-08 14:13:56.922026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:54.120 [2024-12-08 14:13:56.922034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:54.120 [2024-12-08 14:13:56.922040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:54.120 [2024-12-08 14:13:56.922047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:54.120 [2024-12-08 14:13:56.922055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:54.120 [2024-12-08 14:13:56.922061] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.120 [2024-12-08 14:13:56.922069] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.120 [2024-12-08 14:13:56.922076] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.120 [2024-12-08 14:13:56.922083] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.120 [2024-12-08 14:13:56.922090] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.120 [2024-12-08 14:13:56.922098] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:54.120 [2024-12-08 14:13:56.922106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.922113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.120 [2024-12-08 14:13:56.922120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:18:54.120 [2024-12-08 14:13:56.922126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.936748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.936783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.120 [2024-12-08 14:13:56.936792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.585 ms 00:18:54.120 [2024-12-08 14:13:56.936803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.936882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.936890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:54.120 [2024-12-08 14:13:56.936898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:54.120 [2024-12-08 14:13:56.936904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.975822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.975862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.120 [2024-12-08 14:13:56.975873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.877 ms 00:18:54.120 [2024-12-08 14:13:56.975881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.975918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.975927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.120 [2024-12-08 14:13:56.975935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:54.120 [2024-12-08 14:13:56.975942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.976295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.976310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.120 [2024-12-08 14:13:56.976319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:54.120 [2024-12-08 14:13:56.976329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.976436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.976445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.120 [2024-12-08 14:13:56.976453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:54.120 [2024-12-08 14:13:56.976460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:56.990093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:56.990122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.120 [2024-12-08 14:13:56.990132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.615 ms 00:18:54.120 [2024-12-08 
14:13:56.990139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:57.003089] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:54.120 [2024-12-08 14:13:57.003122] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:54.120 [2024-12-08 14:13:57.003132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:57.003139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:54.120 [2024-12-08 14:13:57.003147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.912 ms 00:18:54.120 [2024-12-08 14:13:57.003154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.120 [2024-12-08 14:13:57.027525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.120 [2024-12-08 14:13:57.027567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:54.120 [2024-12-08 14:13:57.027577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.334 ms 00:18:54.120 [2024-12-08 14:13:57.027584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.039725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.039755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:54.383 [2024-12-08 14:13:57.039765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.106 ms 00:18:54.383 [2024-12-08 14:13:57.039771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.051567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.051601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:54.383 [2024-12-08 14:13:57.051611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.765 ms 00:18:54.383 [2024-12-08 14:13:57.051617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.051961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.051972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.383 [2024-12-08 14:13:57.051997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:18:54.383 [2024-12-08 14:13:57.052005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.110049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.110088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:54.383 [2024-12-08 14:13:57.110099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.029 ms 00:18:54.383 [2024-12-08 14:13:57.110106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.120708] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:54.383 [2024-12-08 14:13:57.122900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.122931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:54.383 [2024-12-08 14:13:57.122942] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:18:54.383 [2024-12-08 14:13:57.122954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.123025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.123037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:54.383 [2024-12-08 14:13:57.123046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.383 [2024-12-08 14:13:57.123054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.123111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.123122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.383 [2024-12-08 14:13:57.123130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:54.383 [2024-12-08 14:13:57.123137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.124263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.124391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:54.383 [2024-12-08 14:13:57.124408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:18:54.383 [2024-12-08 14:13:57.124415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.124442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.124450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.383 [2024-12-08 14:13:57.124464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.383 [2024-12-08 14:13:57.124471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.124500] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:54.383 [2024-12-08 14:13:57.124509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.124519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:54.383 [2024-12-08 14:13:57.124526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:54.383 [2024-12-08 14:13:57.124532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.148302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.148335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.383 [2024-12-08 14:13:57.148346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.754 ms 00:18:54.383 [2024-12-08 14:13:57.148353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.148420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.383 [2024-12-08 14:13:57.148429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:54.383 [2024-12-08 14:13:57.148437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:54.383 [2024-12-08 14:13:57.148444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.383 [2024-12-08 14:13:57.149424] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 251.133 ms, result 0 00:18:55.769  [2024-12-08T14:15:01.189Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-08 14:15:00.981402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:00.981493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.269 [2024-12-08 14:15:00.981515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:58.269 [2024-12-08 14:15:00.981528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:00.981565] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.269 [2024-12-08 14:15:00.985463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:00.985663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.269 [2024-12-08 14:15:00.985889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:19:58.269 [2024-12-08 14:15:00.985940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:00.987068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:00.987133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.269 [2024-12-08 14:15:00.987242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:19:58.269 [2024-12-08 14:15:00.987275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:00.991800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:00.991926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.269 [2024-12-08 14:15:00.992209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.486 ms 00:19:58.269 [2024-12-08 14:15:00.992263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:00.998526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:00.998682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:58.269 [2024-12-08 14:15:00.998703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:19:58.269 [2024-12-08 14:15:00.998711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.025739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:01.025926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.269 [2024-12-08 14:15:01.025947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.945 ms 00:19:58.269 [2024-12-08 14:15:01.025956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.043001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:01.043053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.269 [2024-12-08 14:15:01.043066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]
duration: 16.905 ms 00:19:58.269 [2024-12-08 14:15:01.043082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.043247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:01.043259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.269 [2024-12-08 14:15:01.043269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:58.269 [2024-12-08 14:15:01.043277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.069764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:01.069814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:58.269 [2024-12-08 14:15:01.069827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.471 ms 00:19:58.269 [2024-12-08 14:15:01.069834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.095934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:01.096004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:58.269 [2024-12-08 14:15:01.096030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.053 ms 00:19:58.269 [2024-12-08 14:15:01.096037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.121288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.269 [2024-12-08 14:15:01.121336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.269 [2024-12-08 14:15:01.121348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.204 ms 00:19:58.269 [2024-12-08 14:15:01.121355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.269 [2024-12-08 14:15:01.146584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.270 [2024-12-08 14:15:01.146632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:58.270 [2024-12-08 14:15:01.146644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.123 ms 00:19:58.270 [2024-12-08 14:15:01.146651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.270 [2024-12-08 14:15:01.146696] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:58.270 [2024-12-08 14:15:01.146720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146779] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 
[2024-12-08 14:15:01.146970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.146977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:58.270 [2024-12-08 14:15:01.147203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:58.270 [2024-12-08 14:15:01.147412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:58.271 [2024-12-08 14:15:01.147544] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:58.271 [2024-12-08 14:15:01.147552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa53147e-1f9e-4e05-b079-7ab61488c7ac 00:19:58.271 [2024-12-08 14:15:01.147560] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:58.271 [2024-12-08 14:15:01.147567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:58.271 [2024-12-08 14:15:01.147575] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:58.271 [2024-12-08 14:15:01.147584] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:58.271 [2024-12-08 14:15:01.147590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:58.271 [2024-12-08 14:15:01.147598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:58.271 [2024-12-08 14:15:01.147606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:58.271 [2024-12-08 14:15:01.147619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:58.271 [2024-12-08 14:15:01.147626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:58.271 [2024-12-08 14:15:01.147634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.271 [2024-12-08 14:15:01.147642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:58.271 [2024-12-08 14:15:01.147653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:19:58.271 [2024-12-08 14:15:01.147660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.271 [2024-12-08 14:15:01.161551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.271 [2024-12-08 14:15:01.161596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:58.271 [2024-12-08 14:15:01.161607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:19:58.271 [2024-12-08 14:15:01.161615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.271 [2024-12-08 14:15:01.161849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.271 [2024-12-08 14:15:01.161865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:58.271 [2024-12-08 14:15:01.161874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:58.271 [2024-12-08 14:15:01.161881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.201394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.201596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.533 [2024-12-08 14:15:01.201617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.201628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.201696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.201711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.533 [2024-12-08 14:15:01.201720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.201728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.201807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.201818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.533 [2024-12-08 14:15:01.201826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.201834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.201850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.201859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.533 [2024-12-08 14:15:01.201871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.201879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.283708] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.283762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.533 [2024-12-08 14:15:01.283774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.283783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.316613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.316663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.533 [2024-12-08 14:15:01.316683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.316691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.316759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.316769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.533 [2024-12-08 14:15:01.316778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.316786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.316830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.316839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.533 [2024-12-08 14:15:01.316848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.316860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.316958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.316969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.533 [2024-12-08 14:15:01.316978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.533 [2024-12-08 14:15:01.317026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.533 [2024-12-08 14:15:01.317065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.533 [2024-12-08 14:15:01.317075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.533 [2024-12-08 14:15:01.317084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.534 [2024-12-08 14:15:01.317092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.534 [2024-12-08 14:15:01.317161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.534 [2024-12-08 14:15:01.317173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.534 [2024-12-08 14:15:01.317181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.534 [2024-12-08 14:15:01.317189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.534 [2024-12-08 14:15:01.317235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.534 [2024-12-08 14:15:01.317245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.534 [2024-12-08 14:15:01.317254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.534 [2024-12-08 14:15:01.317265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:58.534 [2024-12-08 14:15:01.317400] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.976 ms, result 0 00:19:59.476 00:19:59.476 00:19:59.476 14:15:02 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:02.023 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:02.023 14:15:04 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:02.023 [2024-12-08 14:15:04.437366] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:02.023 [2024-12-08 14:15:04.437463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74316 ] 00:20:02.023 [2024-12-08 14:15:04.582438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.023 [2024-12-08 14:15:04.786013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.285 [2024-12-08 14:15:05.067500] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.285 [2024-12-08 14:15:05.067681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.549 [2024-12-08 14:15:05.218353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.218400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.549 [2024-12-08 14:15:05.218413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.549 [2024-12-08 14:15:05.218424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.218471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.218481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.549 [2024-12-08 14:15:05.218489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:02.549 [2024-12-08 14:15:05.218497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.218516] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.549 [2024-12-08 14:15:05.219834] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.549 [2024-12-08 14:15:05.219882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.219892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.549 [2024-12-08 14:15:05.219901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:20:02.549 [2024-12-08 14:15:05.219909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.221183] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:02.549 [2024-12-08 14:15:05.234239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.234275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:02.549 [2024-12-08 14:15:05.234286] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.057 ms 00:20:02.549 [2024-12-08 14:15:05.234294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.234353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.234363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:02.549 [2024-12-08 14:15:05.234371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:02.549 [2024-12-08 14:15:05.234378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.240309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.240344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.549 [2024-12-08 14:15:05.240353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.870 ms 00:20:02.549 [2024-12-08 14:15:05.240361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.240462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.240476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.549 [2024-12-08 14:15:05.240488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:02.549 [2024-12-08 14:15:05.240496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.240552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.240561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.549 [2024-12-08 14:15:05.240569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:02.549 [2024-12-08 14:15:05.240576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.240606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.549 [2024-12-08 14:15:05.244324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.244353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.549 [2024-12-08 14:15:05.244363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:20:02.549 [2024-12-08 14:15:05.244370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.244401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.244409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.549 [2024-12-08 14:15:05.244417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:02.549 [2024-12-08 14:15:05.244426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.244464] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:02.549 [2024-12-08 14:15:05.244484] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:02.549 [2024-12-08 14:15:05.244517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:02.549 [2024-12-08 14:15:05.244532] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:02.549 [2024-12-08 14:15:05.244610] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:02.549 [2024-12-08 14:15:05.244620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.549 [2024-12-08 14:15:05.244632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:02.549 [2024-12-08 14:15:05.244642] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.549 [2024-12-08 14:15:05.244650] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.549 [2024-12-08 14:15:05.244658] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:02.549 [2024-12-08 14:15:05.244664] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.549 [2024-12-08 14:15:05.244671] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:02.549 [2024-12-08 14:15:05.244678] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:02.549 [2024-12-08 14:15:05.244686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.244693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.549 [2024-12-08 14:15:05.244701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:02.549 [2024-12-08 14:15:05.244708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.244768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.549 [2024-12-08 14:15:05.244776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.549 [2024-12-08 14:15:05.244784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:02.549 [2024-12-08 14:15:05.244790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.549 [2024-12-08 14:15:05.244862] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.549 [2024-12-08 14:15:05.244872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.549 [2024-12-08 14:15:05.244879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.549 [2024-12-08 14:15:05.244886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.549 [2024-12-08 14:15:05.244893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.549 [2024-12-08 14:15:05.244899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.549 [2024-12-08 14:15:05.244906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:02.549 [2024-12-08 14:15:05.244914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.549 [2024-12-08 14:15:05.244920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:02.549 [2024-12-08 14:15:05.244927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.549 [2024-12-08 14:15:05.244934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.549 [2024-12-08 14:15:05.244942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:02.549 [2024-12-08 
14:15:05.244948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.549 [2024-12-08 14:15:05.244954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.549 [2024-12-08 14:15:05.244961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:02.549 [2024-12-08 14:15:05.244967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.549 [2024-12-08 14:15:05.244998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:02.549 [2024-12-08 14:15:05.245005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:02.549 [2024-12-08 14:15:05.245011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:02.550 [2024-12-08 14:15:05.245025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:02.550 [2024-12-08 14:15:05.245031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.550 [2024-12-08 14:15:05.245044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.550 [2024-12-08 14:15:05.245063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.550 [2024-12-08 14:15:05.245083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:02.550 [2024-12-08 14:15:05.245102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.550 [2024-12-08 14:15:05.245146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.550 [2024-12-08 14:15:05.245160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.550 [2024-12-08 14:15:05.245166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:02.550 [2024-12-08 14:15:05.245172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.550 [2024-12-08 14:15:05.245178] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.550 [2024-12-08 14:15:05.245188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.550 [2024-12-08 14:15:05.245196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:02.550 [2024-12-08 14:15:05.245212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.550 [2024-12-08 14:15:05.245218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.550 [2024-12-08 14:15:05.245226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.550 [2024-12-08 14:15:05.245233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.550 [2024-12-08 14:15:05.245239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.550 [2024-12-08 14:15:05.245246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.550 [2024-12-08 14:15:05.245254] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.550 [2024-12-08 14:15:05.245263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.550 [2024-12-08 14:15:05.245271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:02.550 [2024-12-08 14:15:05.245278] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:02.550 [2024-12-08 14:15:05.245284] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:02.550 [2024-12-08 14:15:05.245291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:02.550 [2024-12-08 14:15:05.245298] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:02.550 [2024-12-08 14:15:05.245305] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:02.550 [2024-12-08 14:15:05.245312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:02.550 [2024-12-08 14:15:05.245319] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:02.550 [2024-12-08 14:15:05.245325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:02.550 [2024-12-08 14:15:05.245332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:02.550 [2024-12-08 14:15:05.245339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:02.550 [2024-12-08 14:15:05.245346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:02.550 [2024-12-08 14:15:05.245353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:02.550 [2024-12-08 14:15:05.245359] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.550 [2024-12-08 14:15:05.245367] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.550 [2024-12-08 14:15:05.245375] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.550 [2024-12-08 14:15:05.245382] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.550 [2024-12-08 14:15:05.245389] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.550 [2024-12-08 14:15:05.245395] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.550 [2024-12-08 14:15:05.245403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.245410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.550 [2024-12-08 14:15:05.245418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:20:02.550 [2024-12-08 14:15:05.245427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.261433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.261473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.550 [2024-12-08 14:15:05.261485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.968 ms 00:20:02.550 [2024-12-08 14:15:05.261499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.261586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.261595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:02.550 [2024-12-08 14:15:05.261604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:02.550 [2024-12-08 14:15:05.261613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.303102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.303318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.550 [2024-12-08 14:15:05.303342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.441 ms 00:20:02.550 [2024-12-08 14:15:05.303351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.303402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.303412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.550 [2024-12-08 14:15:05.303421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:02.550 [2024-12-08 14:15:05.303429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.303962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.304019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.550 [2024-12-08 14:15:05.304031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:20:02.550 [2024-12-08 14:15:05.304045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.304168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.304177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.550 [2024-12-08 14:15:05.304186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:02.550 [2024-12-08 14:15:05.304194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.320921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.320969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.550 [2024-12-08 14:15:05.321013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.703 ms 00:20:02.550 [2024-12-08 14:15:05.321022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.335306] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:02.550 [2024-12-08 14:15:05.335356] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:02.550 [2024-12-08 14:15:05.335370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.335377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:02.550 [2024-12-08 14:15:05.335388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.222 ms 00:20:02.550 [2024-12-08 14:15:05.335395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.361880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.362069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:02.550 [2024-12-08 14:15:05.362140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.429 ms 00:20:02.550 [2024-12-08 14:15:05.362173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.375577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.375741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:02.550 [2024-12-08 14:15:05.375805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.319 ms 00:20:02.550 [2024-12-08 14:15:05.375827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.550 [2024-12-08 14:15:05.388707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.550 [2024-12-08 14:15:05.388877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:02.550 [2024-12-08 14:15:05.388937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.814 ms 00:20:02.550 [2024-12-08 14:15:05.388948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.551 [2024-12-08 14:15:05.389767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.551 [2024-12-08 14:15:05.389823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.551 [2024-12-08 14:15:05.389837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:02.551 [2024-12-08 14:15:05.389846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.551 [2024-12-08 14:15:05.456188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.551 [2024-12-08 
14:15:05.456250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:02.551 [2024-12-08 14:15:05.456265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.317 ms 00:20:02.551 [2024-12-08 14:15:05.456275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.468243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:02.812 [2024-12-08 14:15:05.471251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.471296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.812 [2024-12-08 14:15:05.471308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.917 ms 00:20:02.812 [2024-12-08 14:15:05.471324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.471399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.471410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:02.812 [2024-12-08 14:15:05.471419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:02.812 [2024-12-08 14:15:05.471428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.471498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.471510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.812 [2024-12-08 14:15:05.471518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:02.812 [2024-12-08 14:15:05.471527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.472959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.473021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:02.812 [2024-12-08 14:15:05.473033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:20:02.812 [2024-12-08 14:15:05.473040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.473075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.473084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.812 [2024-12-08 14:15:05.473099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:02.812 [2024-12-08 14:15:05.473132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.473173] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:02.812 [2024-12-08 14:15:05.473196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.473209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:02.812 [2024-12-08 14:15:05.473218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:02.812 [2024-12-08 14:15:05.473226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.499765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.499814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.812 [2024-12-08 
14:15:05.499827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.515 ms 00:20:02.812 [2024-12-08 14:15:05.499836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.499933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.812 [2024-12-08 14:15:05.499943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.812 [2024-12-08 14:15:05.499954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:02.812 [2024-12-08 14:15:05.499963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.812 [2024-12-08 14:15:05.501271] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.376 ms, result 0 00:20:03.754  [2024-12-08T14:15:07.613Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-08T14:15:08.555Z] Copying: 32/1024 [MB] (10 MBps) [2024-12-08T14:15:09.938Z] Copying: 60/1024 [MB] (27 MBps) [2024-12-08T14:15:10.882Z] Copying: 79/1024 [MB] (18 MBps) [2024-12-08T14:15:11.825Z] Copying: 100/1024 [MB] (21 MBps) [2024-12-08T14:15:12.768Z] Copying: 115/1024 [MB] (15 MBps) [2024-12-08T14:15:13.714Z] Copying: 133/1024 [MB] (17 MBps) [2024-12-08T14:15:14.659Z] Copying: 153/1024 [MB] (19 MBps) [2024-12-08T14:15:15.599Z] Copying: 163/1024 [MB] (10 MBps) [2024-12-08T14:15:16.539Z] Copying: 176/1024 [MB] (12 MBps) [2024-12-08T14:15:17.928Z] Copying: 188/1024 [MB] (12 MBps) [2024-12-08T14:15:18.872Z] Copying: 205/1024 [MB] (16 MBps) [2024-12-08T14:15:19.818Z] Copying: 215/1024 [MB] (10 MBps) [2024-12-08T14:15:20.763Z] Copying: 225/1024 [MB] (10 MBps) [2024-12-08T14:15:21.768Z] Copying: 235/1024 [MB] (10 MBps) [2024-12-08T14:15:22.705Z] Copying: 245/1024 [MB] (10 MBps) [2024-12-08T14:15:23.649Z] Copying: 276/1024 [MB] (30 MBps) [2024-12-08T14:15:24.594Z] Copying: 289/1024 [MB] (13 MBps) [2024-12-08T14:15:25.539Z] Copying: 305/1024 [MB] (16 MBps) [2024-12-08T14:15:26.926Z] Copying: 323368/1048576 [kB] (10188 kBps) [2024-12-08T14:15:27.867Z] Copying: 329/1024 [MB] (13 MBps) [2024-12-08T14:15:28.811Z] Copying: 372/1024 [MB] (43 MBps) [2024-12-08T14:15:29.755Z] Copying: 389/1024 [MB] (16 MBps) [2024-12-08T14:15:30.696Z] Copying: 403/1024 [MB] (14 MBps) [2024-12-08T14:15:31.639Z] Copying: 422/1024 [MB] (18 MBps) [2024-12-08T14:15:32.583Z] Copying: 432/1024 [MB] (10 MBps) [2024-12-08T14:15:33.522Z] Copying: 443/1024 [MB] (11 MBps) [2024-12-08T14:15:34.891Z] Copying: 470/1024 [MB] (27 MBps) [2024-12-08T14:15:35.831Z] Copying: 522/1024 [MB] (51 MBps) [2024-12-08T14:15:36.771Z] Copying: 557/1024 [MB] (35 MBps) [2024-12-08T14:15:37.711Z] Copying: 577/1024 [MB] (20 MBps) [2024-12-08T14:15:38.651Z] Copying: 598/1024 [MB] (20 MBps) [2024-12-08T14:15:39.592Z] Copying: 629/1024 [MB] (31 MBps) [2024-12-08T14:15:40.535Z] Copying: 650/1024 [MB] (21 MBps) [2024-12-08T14:15:41.933Z] Copying: 671/1024 [MB] (20 MBps) [2024-12-08T14:15:42.876Z] Copying: 697/1024 [MB] (26 MBps) [2024-12-08T14:15:43.820Z] Copying: 719/1024 [MB] (21 MBps) [2024-12-08T14:15:44.759Z] Copying: 737/1024 [MB] (18 MBps) [2024-12-08T14:15:45.701Z] Copying: 762/1024 [MB] (24 MBps) [2024-12-08T14:15:46.644Z] Copying: 779/1024 [MB] (16 MBps) [2024-12-08T14:15:47.586Z] Copying: 800/1024 [MB] (21 MBps) [2024-12-08T14:15:48.531Z] Copying: 820/1024 [MB] (19 MBps) [2024-12-08T14:15:49.917Z] Copying: 836/1024 [MB] (15 MBps) [2024-12-08T14:15:50.860Z] Copying: 852/1024 [MB] (16 MBps) [2024-12-08T14:15:51.801Z] Copying: 864/1024 [MB] (11 MBps) 
[2024-12-08T14:15:52.776Z] Copying: 876/1024 [MB] (12 MBps) [2024-12-08T14:15:53.807Z] Copying: 889/1024 [MB] (12 MBps) [2024-12-08T14:15:54.747Z] Copying: 899/1024 [MB] (10 MBps) [2024-12-08T14:15:55.693Z] Copying: 910/1024 [MB] (10 MBps) [2024-12-08T14:15:56.639Z] Copying: 923/1024 [MB] (13 MBps) [2024-12-08T14:15:57.586Z] Copying: 933/1024 [MB] (10 MBps) [2024-12-08T14:15:58.532Z] Copying: 943/1024 [MB] (10 MBps) [2024-12-08T14:15:59.921Z] Copying: 976536/1048576 [kB] (10072 kBps) [2024-12-08T14:16:00.861Z] Copying: 964/1024 [MB] (10 MBps) [2024-12-08T14:16:01.800Z] Copying: 976/1024 [MB] (11 MBps) [2024-12-08T14:16:02.743Z] Copying: 987/1024 [MB] (11 MBps) [2024-12-08T14:16:03.697Z] Copying: 999/1024 [MB] (11 MBps) [2024-12-08T14:16:04.642Z] Copying: 1010/1024 [MB] (11 MBps) [2024-12-08T14:16:05.216Z] Copying: 1022/1024 [MB] (11 MBps) [2024-12-08T14:16:05.216Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-08 14:16:05.176493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.296 [2024-12-08 14:16:05.176552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:02.296 [2024-12-08 14:16:05.176567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.296 [2024-12-08 14:16:05.176575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.296 [2024-12-08 14:16:05.176687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.296 [2024-12-08 14:16:05.179392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.296 [2024-12-08 14:16:05.179424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:02.296 [2024-12-08 14:16:05.179434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:21:02.296 [2024-12-08 14:16:05.179442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.296 [2024-12-08 14:16:05.190908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.296 [2024-12-08 14:16:05.191049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:02.296 [2024-12-08 14:16:05.191073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.208 ms 00:21:02.296 [2024-12-08 14:16:05.191081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.556 [2024-12-08 14:16:05.215572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.556 [2024-12-08 14:16:05.215606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:02.556 [2024-12-08 14:16:05.215617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.473 ms 00:21:02.556 [2024-12-08 14:16:05.215624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.556 [2024-12-08 14:16:05.221730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.556 [2024-12-08 14:16:05.221758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:02.556 [2024-12-08 14:16:05.221769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.083 ms 00:21:02.556 [2024-12-08 14:16:05.221783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.556 [2024-12-08 14:16:05.246566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.556 [2024-12-08 14:16:05.246605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:02.556 
[2024-12-08 14:16:05.246617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.739 ms 00:21:02.556 [2024-12-08 14:16:05.246625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.556 [2024-12-08 14:16:05.261119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.556 [2024-12-08 14:16:05.261274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:02.556 [2024-12-08 14:16:05.261293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.458 ms 00:21:02.556 [2024-12-08 14:16:05.261301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.819 [2024-12-08 14:16:05.497408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.819 [2024-12-08 14:16:05.497459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:02.819 [2024-12-08 14:16:05.497472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 236.019 ms 00:21:02.819 [2024-12-08 14:16:05.497480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.819 [2024-12-08 14:16:05.523802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.819 [2024-12-08 14:16:05.523848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:02.819 [2024-12-08 14:16:05.523861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.299 ms 00:21:02.819 [2024-12-08 14:16:05.523868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.819 [2024-12-08 14:16:05.549838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.819 [2024-12-08 14:16:05.549883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:02.819 [2024-12-08 14:16:05.549909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.924 ms 00:21:02.819 [2024-12-08 14:16:05.549916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.819 [2024-12-08 14:16:05.575193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.819 [2024-12-08 14:16:05.575371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:02.819 [2024-12-08 14:16:05.575392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.231 ms 00:21:02.819 [2024-12-08 14:16:05.575401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.819 [2024-12-08 14:16:05.600739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.819 [2024-12-08 14:16:05.600784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:02.819 [2024-12-08 14:16:05.600796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.197 ms 00:21:02.819 [2024-12-08 14:16:05.600804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.819 [2024-12-08 14:16:05.600848] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:02.819 [2024-12-08 14:16:05.600863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 81664 / 261120 wr_cnt: 1 state: open 00:21:02.819 [2024-12-08 14:16:05.600874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600891] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.600978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:02.819 [2024-12-08 14:16:05.601079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 
14:16:05.601132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:21:02.820 [2024-12-08 14:16:05.601332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:02.820 [2024-12-08 14:16:05.601712] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:02.820 [2024-12-08 14:16:05.601721] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa53147e-1f9e-4e05-b079-7ab61488c7ac 00:21:02.820 [2024-12-08 14:16:05.601729] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 81664 
00:21:02.820 [2024-12-08 14:16:05.601736] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 82624 00:21:02.820 [2024-12-08 14:16:05.601744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 81664 00:21:02.820 [2024-12-08 14:16:05.601759] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0118 00:21:02.820 [2024-12-08 14:16:05.601767] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:02.820 [2024-12-08 14:16:05.601775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:02.820 [2024-12-08 14:16:05.601782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:02.820 [2024-12-08 14:16:05.601796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:02.820 [2024-12-08 14:16:05.601803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:02.820 [2024-12-08 14:16:05.601810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.820 [2024-12-08 14:16:05.601818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:02.820 [2024-12-08 14:16:05.601827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:21:02.821 [2024-12-08 14:16:05.601834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.821 [2024-12-08 14:16:05.615379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.821 [2024-12-08 14:16:05.615427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:02.821 [2024-12-08 14:16:05.615439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.510 ms 00:21:02.821 [2024-12-08 14:16:05.615446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.821 [2024-12-08 14:16:05.615677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.821 [2024-12-08 14:16:05.615687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:02.821 [2024-12-08 14:16:05.615696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:21:02.821 [2024-12-08 14:16:05.615704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.821 [2024-12-08 14:16:05.655010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.821 [2024-12-08 14:16:05.655057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.821 [2024-12-08 14:16:05.655069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.821 [2024-12-08 14:16:05.655078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.821 [2024-12-08 14:16:05.655139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.821 [2024-12-08 14:16:05.655147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.821 [2024-12-08 14:16:05.655155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.821 [2024-12-08 14:16:05.655163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.821 [2024-12-08 14:16:05.655240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.821 [2024-12-08 14:16:05.655258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.821 [2024-12-08 14:16:05.655266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.821 [2024-12-08 14:16:05.655274] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.821 [2024-12-08 14:16:05.655289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.821 [2024-12-08 14:16:05.655298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.821 [2024-12-08 14:16:05.655306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.821 [2024-12-08 14:16:05.655312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.735848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.735901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.082 [2024-12-08 14:16:05.735913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.735922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.082 [2024-12-08 14:16:05.768341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.082 [2024-12-08 14:16:05.768440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.082 [2024-12-08 14:16:05.768509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.082 [2024-12-08 14:16:05.768641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:03.082 [2024-12-08 14:16:05.768700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.082 [2024-12-08 14:16:05.768768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.768827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.082 [2024-12-08 14:16:05.768837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.082 [2024-12-08 14:16:05.768846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.082 [2024-12-08 14:16:05.768854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.082 [2024-12-08 14:16:05.769023] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.848 ms, result 0 00:21:04.469 00:21:04.469 00:21:04.469 14:16:07 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:04.469 [2024-12-08 14:16:07.206686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:04.469 [2024-12-08 14:16:07.207107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74960 ] 00:21:04.469 [2024-12-08 14:16:07.359153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.730 [2024-12-08 14:16:07.590945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.992 [2024-12-08 14:16:07.876124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.992 [2024-12-08 14:16:07.876436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:05.257 [2024-12-08 14:16:08.031477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.031538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:05.257 [2024-12-08 14:16:08.031554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:05.257 [2024-12-08 14:16:08.031565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.031620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.031630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.257 [2024-12-08 14:16:08.031639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:05.257 [2024-12-08 14:16:08.031647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.031667] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:05.257 [2024-12-08 14:16:08.032470] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:05.257 [2024-12-08 14:16:08.032490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.032499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.257 [2024-12-08 14:16:08.032509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:21:05.257 [2024-12-08 14:16:08.032518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
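restore.sh@80 above invokes spdk_dd, SPDK's dd-style utility, to read the test region back out of the ftl0 bdev; --skip and --count follow the usual dd block semantics. A minimal sketch of the same read-back pattern with classic dd against an ordinary block device — the device path and the 4 KiB block size are illustrative assumptions, not values taken from this run:

    #!/usr/bin/env bash
    dev=/dev/nbd0                # hypothetical stand-in for the ftl0 bdev
    # Read 262144 blocks starting 131072 blocks in, mirroring the spdk_dd call above
    dd if="$dev" of=testfile bs=4096 skip=131072 count=262144
    # Record a checksum so the data can be verified later with 'md5sum -c'
    md5sum testfile > testfile.md5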
00:21:05.257 [2024-12-08 14:16:08.034316] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:05.257 [2024-12-08 14:16:08.049300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.049347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:05.257 [2024-12-08 14:16:08.049360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.985 ms 00:21:05.257 [2024-12-08 14:16:08.049368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.049450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.049461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:05.257 [2024-12-08 14:16:08.049470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:05.257 [2024-12-08 14:16:08.049477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.057699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.057741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.257 [2024-12-08 14:16:08.057752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.145 ms 00:21:05.257 [2024-12-08 14:16:08.057761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.057856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.057867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.257 [2024-12-08 14:16:08.057876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:05.257 [2024-12-08 14:16:08.057883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.057928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.057937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:05.257 [2024-12-08 14:16:08.057946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:05.257 [2024-12-08 14:16:08.057954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.058008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.257 [2024-12-08 14:16:08.062270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.062308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.257 [2024-12-08 14:16:08.062319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.299 ms 00:21:05.257 [2024-12-08 14:16:08.062327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.062364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.062372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:05.257 [2024-12-08 14:16:08.062381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:05.257 [2024-12-08 14:16:08.062391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.062441] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
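Every management step in this startup sequence is emitted as the same Action / name / duration / status quadruple by mngt/ftl_mngt.c, so per-step costs can be totalled straight from the console output. A small awk sketch, assuming the capture has been saved to build.log (the filename is an assumption):

    # Sum the per-step durations reported by the trace_step records
    awk '/trace_step.*duration:/ { sum += $(NF-1) }
         END { printf "total: %.3f ms\n", sum }' build.log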
00:21:05.257 [2024-12-08 14:16:08.062464] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:05.257 [2024-12-08 14:16:08.062501] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:05.257 [2024-12-08 14:16:08.062517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:05.257 [2024-12-08 14:16:08.062593] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:05.257 [2024-12-08 14:16:08.062603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:05.257 [2024-12-08 14:16:08.062617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:05.257 [2024-12-08 14:16:08.062628] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:05.257 [2024-12-08 14:16:08.062636] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:05.257 [2024-12-08 14:16:08.062645] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:05.257 [2024-12-08 14:16:08.062653] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:05.257 [2024-12-08 14:16:08.062661] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:05.257 [2024-12-08 14:16:08.062669] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:05.257 [2024-12-08 14:16:08.062678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.062685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:05.257 [2024-12-08 14:16:08.062693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:21:05.257 [2024-12-08 14:16:08.062699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.062761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.257 [2024-12-08 14:16:08.062770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:05.257 [2024-12-08 14:16:08.062777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:05.257 [2024-12-08 14:16:08.062784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.257 [2024-12-08 14:16:08.062856] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:05.257 [2024-12-08 14:16:08.062866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:05.257 [2024-12-08 14:16:08.062875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.257 [2024-12-08 14:16:08.062883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.257 [2024-12-08 14:16:08.062891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:05.257 [2024-12-08 14:16:08.062898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:05.257 [2024-12-08 14:16:08.062904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:05.257 [2024-12-08 14:16:08.062912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:05.257 [2024-12-08 14:16:08.062919] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:05.257 [2024-12-08 14:16:08.062926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.257 [2024-12-08 14:16:08.062932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:05.257 [2024-12-08 14:16:08.062943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:05.257 [2024-12-08 14:16:08.062950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.257 [2024-12-08 14:16:08.062957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:05.257 [2024-12-08 14:16:08.062964] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:05.257 [2024-12-08 14:16:08.062971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.257 [2024-12-08 14:16:08.063008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:05.257 [2024-12-08 14:16:08.063016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:05.258 [2024-12-08 14:16:08.063022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:05.258 [2024-12-08 14:16:08.063037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:05.258 [2024-12-08 14:16:08.063044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:05.258 [2024-12-08 14:16:08.063058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:05.258 [2024-12-08 14:16:08.063078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:05.258 [2024-12-08 14:16:08.063098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:05.258 [2024-12-08 14:16:08.063118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:05.258 [2024-12-08 14:16:08.063137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.258 [2024-12-08 14:16:08.063150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:05.258 [2024-12-08 14:16:08.063157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:05.258 [2024-12-08 14:16:08.063164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.258 [2024-12-08 14:16:08.063169] 
ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:05.258 [2024-12-08 14:16:08.063180] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:05.258 [2024-12-08 14:16:08.063187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.258 [2024-12-08 14:16:08.063207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:05.258 [2024-12-08 14:16:08.063214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:05.258 [2024-12-08 14:16:08.063221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:05.258 [2024-12-08 14:16:08.063228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:05.258 [2024-12-08 14:16:08.063234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:05.258 [2024-12-08 14:16:08.063240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:05.258 [2024-12-08 14:16:08.063248] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:05.258 [2024-12-08 14:16:08.063259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.258 [2024-12-08 14:16:08.063267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:05.258 [2024-12-08 14:16:08.063274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:05.258 [2024-12-08 14:16:08.063281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:05.258 [2024-12-08 14:16:08.063288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:05.258 [2024-12-08 14:16:08.063295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:05.258 [2024-12-08 14:16:08.063302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:05.258 [2024-12-08 14:16:08.063309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:05.258 [2024-12-08 14:16:08.063316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:05.258 [2024-12-08 14:16:08.063324] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:05.258 [2024-12-08 14:16:08.063332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:05.258 [2024-12-08 14:16:08.063338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:05.258 [2024-12-08 14:16:08.063346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:05.258 [2024-12-08 14:16:08.063353] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:05.258 [2024-12-08 14:16:08.063360] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:05.258 [2024-12-08 14:16:08.063369] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.258 [2024-12-08 14:16:08.063377] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:05.258 [2024-12-08 14:16:08.063384] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:05.258 [2024-12-08 14:16:08.063392] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:05.258 [2024-12-08 14:16:08.063400] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:05.258 [2024-12-08 14:16:08.063407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.063414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:05.258 [2024-12-08 14:16:08.063421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:21:05.258 [2024-12-08 14:16:08.063428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.081684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.081731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.258 [2024-12-08 14:16:08.081742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.215 ms 00:21:05.258 [2024-12-08 14:16:08.081756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.081850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.081858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:05.258 [2024-12-08 14:16:08.081866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:05.258 [2024-12-08 14:16:08.081874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.128213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.128415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.258 [2024-12-08 14:16:08.128437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.285 ms 00:21:05.258 [2024-12-08 14:16:08.128447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.128498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.128508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.258 [2024-12-08 14:16:08.128517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:05.258 [2024-12-08 14:16:08.128525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.129165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 
14:16:08.129188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.258 [2024-12-08 14:16:08.129206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:21:05.258 [2024-12-08 14:16:08.129215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.129343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.129354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.258 [2024-12-08 14:16:08.129362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:05.258 [2024-12-08 14:16:08.129370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.145911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.145956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.258 [2024-12-08 14:16:08.145968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.516 ms 00:21:05.258 [2024-12-08 14:16:08.145976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.258 [2024-12-08 14:16:08.160302] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:05.258 [2024-12-08 14:16:08.160367] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:05.258 [2024-12-08 14:16:08.160380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.258 [2024-12-08 14:16:08.160388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:05.258 [2024-12-08 14:16:08.160398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.265 ms 00:21:05.258 [2024-12-08 14:16:08.160406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.186481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.186527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:05.520 [2024-12-08 14:16:08.186540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.021 ms 00:21:05.520 [2024-12-08 14:16:08.186548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.199543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.199589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:05.520 [2024-12-08 14:16:08.199603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.927 ms 00:21:05.520 [2024-12-08 14:16:08.199610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.212626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.212671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:05.520 [2024-12-08 14:16:08.212693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.968 ms 00:21:05.520 [2024-12-08 14:16:08.212700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.213146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.213163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.520 [2024-12-08 14:16:08.213174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:21:05.520 [2024-12-08 14:16:08.213183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.280226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.280284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:05.520 [2024-12-08 14:16:08.280298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.023 ms 00:21:05.520 [2024-12-08 14:16:08.280308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.291874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:05.520 [2024-12-08 14:16:08.295110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.295152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:05.520 [2024-12-08 14:16:08.295170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.741 ms 00:21:05.520 [2024-12-08 14:16:08.295178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.295252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.295263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:05.520 [2024-12-08 14:16:08.295272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:05.520 [2024-12-08 14:16:08.295280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.296578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.296627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.520 [2024-12-08 14:16:08.296639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:21:05.520 [2024-12-08 14:16:08.296654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.298054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.298321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:05.520 [2024-12-08 14:16:08.298348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:21:05.520 [2024-12-08 14:16:08.298356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.298399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.298418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:05.520 [2024-12-08 14:16:08.298427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:05.520 [2024-12-08 14:16:08.298435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.298472] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:05.520 [2024-12-08 14:16:08.298485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.298493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:05.520 [2024-12-08 14:16:08.298502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:21:05.520 [2024-12-08 14:16:08.298510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.325043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.520 [2024-12-08 14:16:08.325243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:05.520 [2024-12-08 14:16:08.325266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.513 ms 00:21:05.520 [2024-12-08 14:16:08.325281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.520 [2024-12-08 14:16:08.325357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.521 [2024-12-08 14:16:08.325367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:05.521 [2024-12-08 14:16:08.325376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:05.521 [2024-12-08 14:16:08.325384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.521 [2024-12-08 14:16:08.331063] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.322 ms, result 0 00:21:06.904  [2024-12-08T14:16:10.766Z] Copying: 8552/1048576 [kB] (8552 kBps) [2024-12-08T14:16:11.703Z] Copying: 19/1024 [MB] (11 MBps) [2024-12-08T14:16:12.656Z] Copying: 32/1024 [MB] (12 MBps) [2024-12-08T14:16:13.601Z] Copying: 57/1024 [MB] (25 MBps) [2024-12-08T14:16:14.543Z] Copying: 78/1024 [MB] (20 MBps) [2024-12-08T14:16:15.930Z] Copying: 99/1024 [MB] (21 MBps) [2024-12-08T14:16:16.873Z] Copying: 119/1024 [MB] (20 MBps) [2024-12-08T14:16:17.818Z] Copying: 138/1024 [MB] (19 MBps) [2024-12-08T14:16:18.763Z] Copying: 160/1024 [MB] (21 MBps) [2024-12-08T14:16:19.707Z] Copying: 179/1024 [MB] (19 MBps) [2024-12-08T14:16:20.650Z] Copying: 199/1024 [MB] (19 MBps) [2024-12-08T14:16:21.593Z] Copying: 213/1024 [MB] (14 MBps) [2024-12-08T14:16:22.537Z] Copying: 227/1024 [MB] (13 MBps) [2024-12-08T14:16:23.963Z] Copying: 244/1024 [MB] (17 MBps) [2024-12-08T14:16:24.553Z] Copying: 264/1024 [MB] (19 MBps) [2024-12-08T14:16:25.940Z] Copying: 275/1024 [MB] (10 MBps) [2024-12-08T14:16:26.880Z] Copying: 285/1024 [MB] (10 MBps) [2024-12-08T14:16:27.843Z] Copying: 296/1024 [MB] (10 MBps) [2024-12-08T14:16:28.788Z] Copying: 306/1024 [MB] (10 MBps) [2024-12-08T14:16:29.732Z] Copying: 317/1024 [MB] (10 MBps) [2024-12-08T14:16:30.678Z] Copying: 327/1024 [MB] (10 MBps) [2024-12-08T14:16:31.625Z] Copying: 340/1024 [MB] (12 MBps) [2024-12-08T14:16:32.569Z] Copying: 351/1024 [MB] (10 MBps) [2024-12-08T14:16:33.513Z] Copying: 367/1024 [MB] (16 MBps) [2024-12-08T14:16:34.902Z] Copying: 384/1024 [MB] (17 MBps) [2024-12-08T14:16:35.846Z] Copying: 402/1024 [MB] (17 MBps) [2024-12-08T14:16:36.788Z] Copying: 419/1024 [MB] (17 MBps) [2024-12-08T14:16:37.726Z] Copying: 444/1024 [MB] (25 MBps) [2024-12-08T14:16:38.665Z] Copying: 464/1024 [MB] (19 MBps) [2024-12-08T14:16:39.606Z] Copying: 488/1024 [MB] (24 MBps) [2024-12-08T14:16:40.548Z] Copying: 499/1024 [MB] (10 MBps) [2024-12-08T14:16:41.940Z] Copying: 509/1024 [MB] (10 MBps) [2024-12-08T14:16:42.513Z] Copying: 520/1024 [MB] (10 MBps) [2024-12-08T14:16:43.903Z] Copying: 531/1024 [MB] (10 MBps) [2024-12-08T14:16:44.873Z] Copying: 541/1024 [MB] (10 MBps) [2024-12-08T14:16:45.819Z] Copying: 558/1024 [MB] (17 MBps) [2024-12-08T14:16:46.762Z] Copying: 578/1024 [MB] (19 MBps) [2024-12-08T14:16:47.706Z] Copying: 600/1024 [MB] (22 MBps) [2024-12-08T14:16:48.650Z] Copying: 
622/1024 [MB] (21 MBps) [2024-12-08T14:16:49.596Z] Copying: 644/1024 [MB] (22 MBps) [2024-12-08T14:16:50.537Z] Copying: 665/1024 [MB] (20 MBps) [2024-12-08T14:16:51.926Z] Copying: 687/1024 [MB] (22 MBps) [2024-12-08T14:16:52.871Z] Copying: 708/1024 [MB] (20 MBps) [2024-12-08T14:16:53.813Z] Copying: 730/1024 [MB] (21 MBps) [2024-12-08T14:16:54.753Z] Copying: 751/1024 [MB] (21 MBps) [2024-12-08T14:16:55.701Z] Copying: 771/1024 [MB] (20 MBps) [2024-12-08T14:16:56.693Z] Copying: 793/1024 [MB] (21 MBps) [2024-12-08T14:16:57.638Z] Copying: 816/1024 [MB] (22 MBps) [2024-12-08T14:16:58.580Z] Copying: 836/1024 [MB] (20 MBps) [2024-12-08T14:16:59.524Z] Copying: 851/1024 [MB] (14 MBps) [2024-12-08T14:17:00.929Z] Copying: 873/1024 [MB] (21 MBps) [2024-12-08T14:17:01.873Z] Copying: 893/1024 [MB] (20 MBps) [2024-12-08T14:17:02.818Z] Copying: 904/1024 [MB] (11 MBps) [2024-12-08T14:17:03.763Z] Copying: 915/1024 [MB] (10 MBps) [2024-12-08T14:17:04.707Z] Copying: 925/1024 [MB] (10 MBps) [2024-12-08T14:17:05.660Z] Copying: 939/1024 [MB] (13 MBps) [2024-12-08T14:17:06.605Z] Copying: 954/1024 [MB] (14 MBps) [2024-12-08T14:17:07.552Z] Copying: 967/1024 [MB] (12 MBps) [2024-12-08T14:17:08.940Z] Copying: 986/1024 [MB] (19 MBps) [2024-12-08T14:17:09.513Z] Copying: 998/1024 [MB] (11 MBps) [2024-12-08T14:17:10.898Z] Copying: 1008/1024 [MB] (10 MBps) [2024-12-08T14:17:10.898Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-08 14:17:10.816457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.816562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:07.978 [2024-12-08 14:17:10.816578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:07.978 [2024-12-08 14:17:10.816588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.978 [2024-12-08 14:17:10.816612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:07.978 [2024-12-08 14:17:10.819617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.819659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:07.978 [2024-12-08 14:17:10.819670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:22:07.978 [2024-12-08 14:17:10.819679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.978 [2024-12-08 14:17:10.819930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.819945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:07.978 [2024-12-08 14:17:10.819954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:22:07.978 [2024-12-08 14:17:10.819962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.978 [2024-12-08 14:17:10.827907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.827952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:07.978 [2024-12-08 14:17:10.827963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.928 ms 00:22:07.978 [2024-12-08 14:17:10.827972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.978 [2024-12-08 14:17:10.834149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.834185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finish L2P unmaps 00:22:07.978 [2024-12-08 14:17:10.834205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:22:07.978 [2024-12-08 14:17:10.834213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.978 [2024-12-08 14:17:10.860957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.861007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:07.978 [2024-12-08 14:17:10.861021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.690 ms 00:22:07.978 [2024-12-08 14:17:10.861029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.978 [2024-12-08 14:17:10.877346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.978 [2024-12-08 14:17:10.877388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:07.978 [2024-12-08 14:17:10.877400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.261 ms 00:22:07.978 [2024-12-08 14:17:10.877409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.239 [2024-12-08 14:17:11.068393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.239 [2024-12-08 14:17:11.068436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:08.239 [2024-12-08 14:17:11.068448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 190.935 ms 00:22:08.239 [2024-12-08 14:17:11.068456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.239 [2024-12-08 14:17:11.094396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.239 [2024-12-08 14:17:11.094433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:08.239 [2024-12-08 14:17:11.094445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.916 ms 00:22:08.239 [2024-12-08 14:17:11.094452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.239 [2024-12-08 14:17:11.119498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.239 [2024-12-08 14:17:11.119535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:08.239 [2024-12-08 14:17:11.119547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:22:08.239 [2024-12-08 14:17:11.119568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.239 [2024-12-08 14:17:11.144589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.239 [2024-12-08 14:17:11.144625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:08.239 [2024-12-08 14:17:11.144636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:22:08.239 [2024-12-08 14:17:11.144644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.503 [2024-12-08 14:17:11.169475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.503 [2024-12-08 14:17:11.169513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:08.503 [2024-12-08 14:17:11.169524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.750 ms 00:22:08.503 [2024-12-08 14:17:11.169531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.503 [2024-12-08 14:17:11.169573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:08.504 
[2024-12-08 14:17:11.169589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:22:08.504 [2024-12-08 14:17:11.169601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: 
free 00:22:08.504 [2024-12-08 14:17:11.169796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.169962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 
261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:08.504 [2024-12-08 14:17:11.170144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170417] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:08.505 [2024-12-08 14:17:11.170432] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:08.505 [2024-12-08 14:17:11.170441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa53147e-1f9e-4e05-b079-7ab61488c7ac 00:22:08.505 [2024-12-08 14:17:11.170449] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:22:08.505 [2024-12-08 14:17:11.170457] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 53184 00:22:08.505 [2024-12-08 14:17:11.170470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 52224 00:22:08.505 [2024-12-08 14:17:11.170479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0184 00:22:08.505 [2024-12-08 14:17:11.170486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:08.505 [2024-12-08 14:17:11.170494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:08.505 [2024-12-08 14:17:11.170505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:08.505 [2024-12-08 14:17:11.170512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:08.505 [2024-12-08 14:17:11.170525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:08.505 [2024-12-08 14:17:11.170533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.505 [2024-12-08 14:17:11.170542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:08.505 [2024-12-08 14:17:11.170551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:22:08.505 [2024-12-08 14:17:11.170558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.184188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.505 [2024-12-08 14:17:11.184226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:08.505 [2024-12-08 14:17:11.184237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.598 ms 00:22:08.505 [2024-12-08 14:17:11.184246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.184468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.505 [2024-12-08 14:17:11.184477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:08.505 [2024-12-08 14:17:11.184485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:22:08.505 [2024-12-08 14:17:11.184494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.223088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.505 [2024-12-08 14:17:11.223129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.505 [2024-12-08 14:17:11.223140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.505 [2024-12-08 14:17:11.223148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.223219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.505 [2024-12-08 14:17:11.223229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.505 [2024-12-08 14:17:11.223237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.505 [2024-12-08 
14:17:11.223246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.223321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.505 [2024-12-08 14:17:11.223332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.505 [2024-12-08 14:17:11.223341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.505 [2024-12-08 14:17:11.223349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.223365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.505 [2024-12-08 14:17:11.223373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.505 [2024-12-08 14:17:11.223381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.505 [2024-12-08 14:17:11.223389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.505 [2024-12-08 14:17:11.308564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.505 [2024-12-08 14:17:11.308620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.506 [2024-12-08 14:17:11.308632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.308641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.506 [2024-12-08 14:17:11.341315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.506 [2024-12-08 14:17:11.341413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.506 [2024-12-08 14:17:11.341483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.506 [2024-12-08 14:17:11.341618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:08.506 [2024-12-08 14:17:11.341679] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.506 [2024-12-08 14:17:11.341753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.506 [2024-12-08 14:17:11.341819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.506 [2024-12-08 14:17:11.341828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.506 [2024-12-08 14:17:11.341837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.506 [2024-12-08 14:17:11.341969] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.478 ms, result 0 00:22:09.445 00:22:09.445 00:22:09.445 14:17:12 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:11.988 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:11.988 14:17:14 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:11.988 14:17:14 -- ftl/restore.sh@85 -- # restore_kill 00:22:11.988 14:17:14 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:11.988 14:17:14 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:11.988 14:17:14 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:11.988 14:17:14 -- ftl/restore.sh@32 -- # killprocess 72823 00:22:11.988 Process with pid 72823 is not found 00:22:11.988 Remove shared memory files 00:22:11.988 14:17:14 -- common/autotest_common.sh@936 -- # '[' -z 72823 ']' 00:22:11.988 14:17:14 -- common/autotest_common.sh@940 -- # kill -0 72823 00:22:11.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72823) - No such process 00:22:11.988 14:17:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72823 is not found' 00:22:11.988 14:17:14 -- ftl/restore.sh@33 -- # remove_shm 00:22:11.988 14:17:14 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:11.988 14:17:14 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:11.988 14:17:14 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:11.988 14:17:14 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:11.988 14:17:14 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:11.988 14:17:14 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:11.988 00:22:11.988 real 4m30.897s 00:22:11.988 user 4m18.095s 00:22:11.988 sys 0m12.638s 00:22:11.988 14:17:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.988 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:11.988 ************************************ 00:22:11.988 END TEST ftl_restore 00:22:11.988 ************************************ 00:22:11.988 14:17:14 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:11.988 14:17:14 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:11.988 14:17:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 
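Up to this point the restore test has passed its integrity gate: the stats dump above gives a write amplification of WAF = total writes / user writes = 53184 / 52224 ≈ 1.0184, and md5sum -c confirms the file read back through the restored FTL device matches the checksum recorded before the shutdown. The teardown traced here is the harness's usual trap-based cleanup idiom; a minimal sketch of it follows, with the helper body abbreviated and variable names assumed from dirty_shutdown.sh (the real restore_kill also removes ftl.json, and killprocess tolerates an already-exited pid, which is why the "No such process" line above is non-fatal):

    restore_kill() {
        rm -f "$testdir/testfile" "$testdir/testfile.md5"   # test artifacts
        killprocess "$svcpid"                               # stop the SPDK target app
        remove_shm                                          # drop shared-memory leftovers
    }
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT         # clean up on any failure path
    # ... test body: write data, record md5, dirty restart, md5sum -c ...
    trap - SIGINT SIGTERM EXIT                              # success: disarm the trap
    restore_kill                                            # then clean up exactly once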
00:22:11.988 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:11.988 ************************************ 00:22:11.988 START TEST ftl_dirty_shutdown 00:22:11.988 ************************************ 00:22:11.988 14:17:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:11.988 * Looking for test storage... 00:22:11.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:11.988 14:17:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:11.988 14:17:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:11.988 14:17:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:11.988 14:17:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:11.988 14:17:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:11.988 14:17:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:11.988 14:17:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:11.988 14:17:14 -- scripts/common.sh@335 -- # IFS=.-: 00:22:11.988 14:17:14 -- scripts/common.sh@335 -- # read -ra ver1 00:22:11.988 14:17:14 -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.988 14:17:14 -- scripts/common.sh@336 -- # read -ra ver2 00:22:11.988 14:17:14 -- scripts/common.sh@337 -- # local 'op=<' 00:22:11.988 14:17:14 -- scripts/common.sh@339 -- # ver1_l=2 00:22:11.988 14:17:14 -- scripts/common.sh@340 -- # ver2_l=1 00:22:11.988 14:17:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:11.988 14:17:14 -- scripts/common.sh@343 -- # case "$op" in 00:22:11.988 14:17:14 -- scripts/common.sh@344 -- # : 1 00:22:11.988 14:17:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:11.988 14:17:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.988 14:17:14 -- scripts/common.sh@364 -- # decimal 1 00:22:11.988 14:17:14 -- scripts/common.sh@352 -- # local d=1 00:22:11.988 14:17:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.988 14:17:14 -- scripts/common.sh@354 -- # echo 1 00:22:11.988 14:17:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:11.988 14:17:14 -- scripts/common.sh@365 -- # decimal 2 00:22:11.988 14:17:14 -- scripts/common.sh@352 -- # local d=2 00:22:11.988 14:17:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.988 14:17:14 -- scripts/common.sh@354 -- # echo 2 00:22:11.988 14:17:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:11.988 14:17:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:11.988 14:17:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:11.988 14:17:14 -- scripts/common.sh@367 -- # return 0 00:22:11.988 14:17:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.988 14:17:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.988 --rc genhtml_branch_coverage=1 00:22:11.988 --rc genhtml_function_coverage=1 00:22:11.988 --rc genhtml_legend=1 00:22:11.988 --rc geninfo_all_blocks=1 00:22:11.988 --rc geninfo_unexecuted_blocks=1 00:22:11.988 00:22:11.988 ' 00:22:11.988 14:17:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.988 --rc genhtml_branch_coverage=1 00:22:11.988 --rc genhtml_function_coverage=1 00:22:11.988 --rc genhtml_legend=1 00:22:11.988 --rc geninfo_all_blocks=1 00:22:11.988 --rc geninfo_unexecuted_blocks=1 00:22:11.988 00:22:11.988 ' 00:22:11.988 
14:17:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.988 --rc genhtml_branch_coverage=1 00:22:11.988 --rc genhtml_function_coverage=1 00:22:11.988 --rc genhtml_legend=1 00:22:11.988 --rc geninfo_all_blocks=1 00:22:11.988 --rc geninfo_unexecuted_blocks=1 00:22:11.988 00:22:11.988 ' 00:22:11.988 14:17:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:11.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.988 --rc genhtml_branch_coverage=1 00:22:11.988 --rc genhtml_function_coverage=1 00:22:11.988 --rc genhtml_legend=1 00:22:11.988 --rc geninfo_all_blocks=1 00:22:11.988 --rc geninfo_unexecuted_blocks=1 00:22:11.988 00:22:11.988 ' 00:22:11.988 14:17:14 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:11.988 14:17:14 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:11.988 14:17:14 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:11.988 14:17:14 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:11.988 14:17:14 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:11.988 14:17:14 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:11.988 14:17:14 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.988 14:17:14 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:11.988 14:17:14 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:11.988 14:17:14 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:11.989 14:17:14 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:11.989 14:17:14 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:11.989 14:17:14 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:11.989 14:17:14 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:11.989 14:17:14 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:11.989 14:17:14 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:11.989 14:17:14 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:11.989 14:17:14 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:11.989 14:17:14 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:11.989 14:17:14 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:11.989 14:17:14 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:11.989 14:17:14 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:11.989 14:17:14 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:11.989 14:17:14 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:11.989 14:17:14 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:11.989 14:17:14 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:11.989 14:17:14 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:11.989 14:17:14 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.989 14:17:14 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@11 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75723 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75723 00:22:11.989 14:17:14 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:11.989 14:17:14 -- common/autotest_common.sh@829 -- # '[' -z 75723 ']' 00:22:11.989 14:17:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.989 14:17:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.989 14:17:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.989 14:17:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.989 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:11.989 [2024-12-08 14:17:14.762815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
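The target is now up (pid 75723, one reactor pinned to core 0 by -m 0x1, listening on the default /var/tmp/spdk.sock). The script's next moves, traced below, attach the two QEMU NVMe controllers over that socket; collected here for readability, exactly as issued in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device that will hold the FTL data region
    # (1310720 blocks x 4096 B = 5 GiB, per the bdev dump below)
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    # second device, split later into the non-volatile write-buffer cache
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0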
00:22:11.989 [2024-12-08 14:17:14.762966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75723 ] 00:22:12.249 [2024-12-08 14:17:14.912111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.249 [2024-12-08 14:17:15.130425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:12.249 [2024-12-08 14:17:15.130664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.632 14:17:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.632 14:17:16 -- common/autotest_common.sh@862 -- # return 0 00:22:13.632 14:17:16 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:13.632 14:17:16 -- ftl/common.sh@54 -- # local name=nvme0 00:22:13.632 14:17:16 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:13.632 14:17:16 -- ftl/common.sh@56 -- # local size=103424 00:22:13.632 14:17:16 -- ftl/common.sh@59 -- # local base_bdev 00:22:13.632 14:17:16 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:13.893 14:17:16 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:13.893 14:17:16 -- ftl/common.sh@62 -- # local base_size 00:22:13.893 14:17:16 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:13.893 14:17:16 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:13.893 14:17:16 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:13.893 14:17:16 -- common/autotest_common.sh@1369 -- # local bs 00:22:13.893 14:17:16 -- common/autotest_common.sh@1370 -- # local nb 00:22:13.893 14:17:16 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:13.893 14:17:16 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:13.893 { 00:22:13.893 "name": "nvme0n1", 00:22:13.893 "aliases": [ 00:22:13.893 "69a4332f-68ea-40b2-a748-47a01be3db16" 00:22:13.893 ], 00:22:13.893 "product_name": "NVMe disk", 00:22:13.893 "block_size": 4096, 00:22:13.893 "num_blocks": 1310720, 00:22:13.893 "uuid": "69a4332f-68ea-40b2-a748-47a01be3db16", 00:22:13.893 "assigned_rate_limits": { 00:22:13.893 "rw_ios_per_sec": 0, 00:22:13.893 "rw_mbytes_per_sec": 0, 00:22:13.893 "r_mbytes_per_sec": 0, 00:22:13.893 "w_mbytes_per_sec": 0 00:22:13.893 }, 00:22:13.893 "claimed": true, 00:22:13.893 "claim_type": "read_many_write_one", 00:22:13.893 "zoned": false, 00:22:13.893 "supported_io_types": { 00:22:13.893 "read": true, 00:22:13.893 "write": true, 00:22:13.893 "unmap": true, 00:22:13.893 "write_zeroes": true, 00:22:13.893 "flush": true, 00:22:13.893 "reset": true, 00:22:13.893 "compare": true, 00:22:13.893 "compare_and_write": false, 00:22:13.893 "abort": true, 00:22:13.893 "nvme_admin": true, 00:22:13.893 "nvme_io": true 00:22:13.893 }, 00:22:13.893 "driver_specific": { 00:22:13.893 "nvme": [ 00:22:13.893 { 00:22:13.893 "pci_address": "0000:00:07.0", 00:22:13.893 "trid": { 00:22:13.893 "trtype": "PCIe", 00:22:13.893 "traddr": "0000:00:07.0" 00:22:13.893 }, 00:22:13.893 "ctrlr_data": { 00:22:13.893 "cntlid": 0, 00:22:13.893 "vendor_id": "0x1b36", 00:22:13.893 "model_number": "QEMU NVMe Ctrl", 00:22:13.893 "serial_number": "12341", 00:22:13.893 "firmware_revision": "8.0.0", 00:22:13.893 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:13.893 "oacs": { 00:22:13.893 "security": 
0, 00:22:13.893 "format": 1, 00:22:13.893 "firmware": 0, 00:22:13.893 "ns_manage": 1 00:22:13.893 }, 00:22:13.893 "multi_ctrlr": false, 00:22:13.893 "ana_reporting": false 00:22:13.893 }, 00:22:13.893 "vs": { 00:22:13.893 "nvme_version": "1.4" 00:22:13.893 }, 00:22:13.893 "ns_data": { 00:22:13.893 "id": 1, 00:22:13.893 "can_share": false 00:22:13.893 } 00:22:13.893 } 00:22:13.893 ], 00:22:13.893 "mp_policy": "active_passive" 00:22:13.893 } 00:22:13.893 } 00:22:13.893 ]' 00:22:13.893 14:17:16 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:13.893 14:17:16 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:13.893 14:17:16 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:14.154 14:17:16 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:14.154 14:17:16 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:14.154 14:17:16 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:14.154 14:17:16 -- ftl/common.sh@63 -- # base_size=5120 00:22:14.154 14:17:16 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:14.154 14:17:16 -- ftl/common.sh@67 -- # clear_lvols 00:22:14.154 14:17:16 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:14.154 14:17:16 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:14.154 14:17:17 -- ftl/common.sh@28 -- # stores=9b65a27c-f327-42b2-9716-a1975e407e1d 00:22:14.154 14:17:17 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:14.154 14:17:17 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b65a27c-f327-42b2-9716-a1975e407e1d 00:22:14.416 14:17:17 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:14.675 14:17:17 -- ftl/common.sh@68 -- # lvs=af278f2b-5926-4740-98c8-c5a364c00fd6 00:22:14.675 14:17:17 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u af278f2b-5926-4740-98c8-c5a364c00fd6 00:22:14.934 14:17:17 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:14.934 14:17:17 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:14.934 14:17:17 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:14.934 14:17:17 -- ftl/common.sh@35 -- # local name=nvc0 00:22:14.934 14:17:17 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:14.934 14:17:17 -- ftl/common.sh@37 -- # local base_bdev=98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:14.934 14:17:17 -- ftl/common.sh@38 -- # local cache_size= 00:22:14.934 14:17:17 -- ftl/common.sh@41 -- # get_bdev_size 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:14.934 14:17:17 -- common/autotest_common.sh@1367 -- # local bdev_name=98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:14.934 14:17:17 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:14.934 14:17:17 -- common/autotest_common.sh@1369 -- # local bs 00:22:14.934 14:17:17 -- common/autotest_common.sh@1370 -- # local nb 00:22:14.934 14:17:17 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:14.934 14:17:17 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:14.934 { 00:22:14.934 "name": "98b261e4-0780-4e8d-92fa-6f114e57dbb4", 00:22:14.934 "aliases": [ 00:22:14.934 "lvs/nvme0n1p0" 00:22:14.934 ], 00:22:14.934 "product_name": "Logical Volume", 00:22:14.934 "block_size": 4096, 00:22:14.934 "num_blocks": 26476544, 00:22:14.934 
"uuid": "98b261e4-0780-4e8d-92fa-6f114e57dbb4", 00:22:14.934 "assigned_rate_limits": { 00:22:14.934 "rw_ios_per_sec": 0, 00:22:14.934 "rw_mbytes_per_sec": 0, 00:22:14.934 "r_mbytes_per_sec": 0, 00:22:14.934 "w_mbytes_per_sec": 0 00:22:14.934 }, 00:22:14.934 "claimed": false, 00:22:14.934 "zoned": false, 00:22:14.934 "supported_io_types": { 00:22:14.934 "read": true, 00:22:14.934 "write": true, 00:22:14.934 "unmap": true, 00:22:14.934 "write_zeroes": true, 00:22:14.934 "flush": false, 00:22:14.934 "reset": true, 00:22:14.934 "compare": false, 00:22:14.934 "compare_and_write": false, 00:22:14.934 "abort": false, 00:22:14.934 "nvme_admin": false, 00:22:14.934 "nvme_io": false 00:22:14.934 }, 00:22:14.934 "driver_specific": { 00:22:14.934 "lvol": { 00:22:14.934 "lvol_store_uuid": "af278f2b-5926-4740-98c8-c5a364c00fd6", 00:22:14.934 "base_bdev": "nvme0n1", 00:22:14.934 "thin_provision": true, 00:22:14.934 "snapshot": false, 00:22:14.934 "clone": false, 00:22:14.934 "esnap_clone": false 00:22:14.934 } 00:22:14.934 } 00:22:14.934 } 00:22:14.934 ]' 00:22:15.193 14:17:17 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:15.193 14:17:17 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:15.193 14:17:17 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:15.193 14:17:17 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:15.193 14:17:17 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:15.193 14:17:17 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:15.193 14:17:17 -- ftl/common.sh@41 -- # local base_size=5171 00:22:15.193 14:17:17 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:15.193 14:17:17 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:15.451 14:17:18 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:15.451 14:17:18 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:15.451 14:17:18 -- ftl/common.sh@48 -- # get_bdev_size 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:15.451 14:17:18 -- common/autotest_common.sh@1367 -- # local bdev_name=98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:15.451 14:17:18 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:15.451 14:17:18 -- common/autotest_common.sh@1369 -- # local bs 00:22:15.452 14:17:18 -- common/autotest_common.sh@1370 -- # local nb 00:22:15.452 14:17:18 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:15.452 14:17:18 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:15.452 { 00:22:15.452 "name": "98b261e4-0780-4e8d-92fa-6f114e57dbb4", 00:22:15.452 "aliases": [ 00:22:15.452 "lvs/nvme0n1p0" 00:22:15.452 ], 00:22:15.452 "product_name": "Logical Volume", 00:22:15.452 "block_size": 4096, 00:22:15.452 "num_blocks": 26476544, 00:22:15.452 "uuid": "98b261e4-0780-4e8d-92fa-6f114e57dbb4", 00:22:15.452 "assigned_rate_limits": { 00:22:15.452 "rw_ios_per_sec": 0, 00:22:15.452 "rw_mbytes_per_sec": 0, 00:22:15.452 "r_mbytes_per_sec": 0, 00:22:15.452 "w_mbytes_per_sec": 0 00:22:15.452 }, 00:22:15.452 "claimed": false, 00:22:15.452 "zoned": false, 00:22:15.452 "supported_io_types": { 00:22:15.452 "read": true, 00:22:15.452 "write": true, 00:22:15.452 "unmap": true, 00:22:15.452 "write_zeroes": true, 00:22:15.452 "flush": false, 00:22:15.452 "reset": true, 00:22:15.452 "compare": false, 00:22:15.452 "compare_and_write": false, 00:22:15.452 "abort": false, 00:22:15.452 "nvme_admin": false, 00:22:15.452 "nvme_io": false 00:22:15.452 }, 
00:22:15.452 "driver_specific": { 00:22:15.452 "lvol": { 00:22:15.452 "lvol_store_uuid": "af278f2b-5926-4740-98c8-c5a364c00fd6", 00:22:15.452 "base_bdev": "nvme0n1", 00:22:15.452 "thin_provision": true, 00:22:15.452 "snapshot": false, 00:22:15.452 "clone": false, 00:22:15.452 "esnap_clone": false 00:22:15.452 } 00:22:15.452 } 00:22:15.452 } 00:22:15.452 ]' 00:22:15.452 14:17:18 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:15.710 14:17:18 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:15.710 14:17:18 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:15.710 14:17:18 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:15.710 14:17:18 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:15.710 14:17:18 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:15.710 14:17:18 -- ftl/common.sh@48 -- # cache_size=5171 00:22:15.710 14:17:18 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:15.710 14:17:18 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:15.710 14:17:18 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:15.710 14:17:18 -- common/autotest_common.sh@1367 -- # local bdev_name=98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:15.710 14:17:18 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:15.710 14:17:18 -- common/autotest_common.sh@1369 -- # local bs 00:22:15.710 14:17:18 -- common/autotest_common.sh@1370 -- # local nb 00:22:15.710 14:17:18 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98b261e4-0780-4e8d-92fa-6f114e57dbb4 00:22:15.969 14:17:18 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:15.969 { 00:22:15.969 "name": "98b261e4-0780-4e8d-92fa-6f114e57dbb4", 00:22:15.969 "aliases": [ 00:22:15.969 "lvs/nvme0n1p0" 00:22:15.969 ], 00:22:15.969 "product_name": "Logical Volume", 00:22:15.969 "block_size": 4096, 00:22:15.969 "num_blocks": 26476544, 00:22:15.969 "uuid": "98b261e4-0780-4e8d-92fa-6f114e57dbb4", 00:22:15.969 "assigned_rate_limits": { 00:22:15.969 "rw_ios_per_sec": 0, 00:22:15.969 "rw_mbytes_per_sec": 0, 00:22:15.969 "r_mbytes_per_sec": 0, 00:22:15.969 "w_mbytes_per_sec": 0 00:22:15.969 }, 00:22:15.969 "claimed": false, 00:22:15.969 "zoned": false, 00:22:15.969 "supported_io_types": { 00:22:15.969 "read": true, 00:22:15.969 "write": true, 00:22:15.969 "unmap": true, 00:22:15.969 "write_zeroes": true, 00:22:15.969 "flush": false, 00:22:15.969 "reset": true, 00:22:15.969 "compare": false, 00:22:15.969 "compare_and_write": false, 00:22:15.969 "abort": false, 00:22:15.969 "nvme_admin": false, 00:22:15.969 "nvme_io": false 00:22:15.969 }, 00:22:15.969 "driver_specific": { 00:22:15.969 "lvol": { 00:22:15.969 "lvol_store_uuid": "af278f2b-5926-4740-98c8-c5a364c00fd6", 00:22:15.969 "base_bdev": "nvme0n1", 00:22:15.969 "thin_provision": true, 00:22:15.969 "snapshot": false, 00:22:15.969 "clone": false, 00:22:15.969 "esnap_clone": false 00:22:15.969 } 00:22:15.969 } 00:22:15.969 } 00:22:15.969 ]' 00:22:15.969 14:17:18 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:15.969 14:17:18 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:15.969 14:17:18 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:15.969 14:17:18 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:15.969 14:17:18 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:15.969 14:17:18 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:15.969 
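Each of the bdev dumps above feeds get_bdev_size, which, per the values traced here, derives a size in MiB from the JSON as block_size * num_blocks / 2^20. A standalone equivalent for the thin-provisioned lvol used in this run (4096 x 26476544 / 2^20 = 103424):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$($rpc bdev_get_bdevs -b 98b261e4-0780-4e8d-92fa-6f114e57dbb4)
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544 in this run
    echo $(( bs * nb / 1024 / 1024 ))         # 103424 MiB, the value echoed above

From here the pieces come together: cache_size lands at 5171 MiB, bdev_split_create nvc0n1 -s 5171 1 carves the single cache partition nvc0n1p0, and the next step assembles ftl_construct_args for bdev_ftl_create -b ftl0 -d <the lvol bdev> --l2p_dram_limit 10 -c nvc0n1p0, issued through rpc.py with a 240-second timeout (-t 240) since first startup scrubs the NV cache.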
14:17:18 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:15.969 14:17:18 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 98b261e4-0780-4e8d-92fa-6f114e57dbb4 --l2p_dram_limit 10' 00:22:15.969 14:17:18 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:15.969 14:17:18 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:15.969 14:17:18 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:15.969 14:17:18 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 98b261e4-0780-4e8d-92fa-6f114e57dbb4 --l2p_dram_limit 10 -c nvc0n1p0 00:22:16.229 [2024-12-08 14:17:19.038079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.229 [2024-12-08 14:17:19.038116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.229 [2024-12-08 14:17:19.038128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:16.229 [2024-12-08 14:17:19.038136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.229 [2024-12-08 14:17:19.038175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.229 [2024-12-08 14:17:19.038183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.229 [2024-12-08 14:17:19.038190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:16.229 [2024-12-08 14:17:19.038197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.229 [2024-12-08 14:17:19.038212] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.229 [2024-12-08 14:17:19.040429] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.229 [2024-12-08 14:17:19.040458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.229 [2024-12-08 14:17:19.040464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.229 [2024-12-08 14:17:19.040472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:22:16.229 [2024-12-08 14:17:19.040478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.229 [2024-12-08 14:17:19.040505] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4613a08b-5f26-4463-9b36-017e8defea61 00:22:16.229 [2024-12-08 14:17:19.041505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.229 [2024-12-08 14:17:19.041534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:16.229 [2024-12-08 14:17:19.041542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:16.229 [2024-12-08 14:17:19.041549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.229 [2024-12-08 14:17:19.046085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.229 [2024-12-08 14:17:19.046112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.229 [2024-12-08 14:17:19.046119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.480 ms 00:22:16.229 [2024-12-08 14:17:19.046127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.229 [2024-12-08 14:17:19.046191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.229 [2024-12-08 14:17:19.046199] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.229 [2024-12-08 14:17:19.046206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:16.229 [2024-12-08 14:17:19.046216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.229 [2024-12-08 14:17:19.046251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.230 [2024-12-08 14:17:19.046262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.230 [2024-12-08 14:17:19.046268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:16.230 [2024-12-08 14:17:19.046275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.230 [2024-12-08 14:17:19.046292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.230 [2024-12-08 14:17:19.049213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.230 [2024-12-08 14:17:19.049237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.230 [2024-12-08 14:17:19.049246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:22:16.230 [2024-12-08 14:17:19.049252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.230 [2024-12-08 14:17:19.049281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.230 [2024-12-08 14:17:19.049287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.230 [2024-12-08 14:17:19.049294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.230 [2024-12-08 14:17:19.049300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.230 [2024-12-08 14:17:19.049314] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:16.230 [2024-12-08 14:17:19.049401] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:16.230 [2024-12-08 14:17:19.049413] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.230 [2024-12-08 14:17:19.049420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:16.230 [2024-12-08 14:17:19.049429] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049436] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049445] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:16.230 [2024-12-08 14:17:19.049457] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.230 [2024-12-08 14:17:19.049464] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:16.230 [2024-12-08 14:17:19.049469] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:16.230 [2024-12-08 14:17:19.049476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.230 [2024-12-08 14:17:19.049482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.230 [2024-12-08 14:17:19.049489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:22:16.230 [2024-12-08 14:17:19.049495] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.230 [2024-12-08 14:17:19.049542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.230 [2024-12-08 14:17:19.049548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.230 [2024-12-08 14:17:19.049555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:16.230 [2024-12-08 14:17:19.049561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.230 [2024-12-08 14:17:19.049620] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.230 [2024-12-08 14:17:19.049633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.230 [2024-12-08 14:17:19.049640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049654] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.230 [2024-12-08 14:17:19.049658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.230 [2024-12-08 14:17:19.049676] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.230 [2024-12-08 14:17:19.049687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.230 [2024-12-08 14:17:19.049692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:16.230 [2024-12-08 14:17:19.049699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.230 [2024-12-08 14:17:19.049704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.230 [2024-12-08 14:17:19.049710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:16.230 [2024-12-08 14:17:19.049715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:16.230 [2024-12-08 14:17:19.049727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:16.230 [2024-12-08 14:17:19.049732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:16.230 [2024-12-08 14:17:19.049744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:16.230 [2024-12-08 14:17:19.049750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.230 [2024-12-08 14:17:19.049762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.230 [2024-12-08 14:17:19.049780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049784] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.230 [2024-12-08 14:17:19.049795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.230 [2024-12-08 14:17:19.049813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.230 [2024-12-08 14:17:19.049829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.230 [2024-12-08 14:17:19.049839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.230 [2024-12-08 14:17:19.049846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:16.230 [2024-12-08 14:17:19.049851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.230 [2024-12-08 14:17:19.049857] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.230 [2024-12-08 14:17:19.049862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.230 [2024-12-08 14:17:19.049869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.230 [2024-12-08 14:17:19.049882] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:16.230 [2024-12-08 14:17:19.049887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.230 [2024-12-08 14:17:19.049893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.230 [2024-12-08 14:17:19.049898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.230 [2024-12-08 14:17:19.049905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.230 [2024-12-08 14:17:19.049910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.230 [2024-12-08 14:17:19.049917] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.230 [2024-12-08 14:17:19.049924] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.230 [2024-12-08 14:17:19.049931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:16.230 [2024-12-08 14:17:19.049937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:16.230 [2024-12-08 14:17:19.049944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:16.230 [2024-12-08 14:17:19.049950] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
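The superblock region table that starts above and continues below records each region as blk_offs/blk_sz counted in 4 KiB FTL blocks; the MiB figures in the per-region layout dump are those same values rescaled. A quick cross-check: the type:0x2 entry (blk_offs:0x20 blk_sz:0x5000) lines up exactly with the l2p region dumped above, whose size also matches the configured 20971520 L2P entries x 4 B addresses = 80 MiB:

    echo $(( 0x5000 * 4096 / 1048576 ))   # 80 -> "Region l2p ... blocks: 80.00 MiB"
    echo $(( 0x20 * 4096 ))               # 131072 B = 0.12 MiB, the region's offset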
00:22:16.230 [2024-12-08 14:17:19.049956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:16.230 [2024-12-08 14:17:19.049961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:16.230 [2024-12-08 14:17:19.049968] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:16.230 [2024-12-08 14:17:19.049973] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:16.230 [2024-12-08 14:17:19.049989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:16.230 [2024-12-08 14:17:19.049995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:16.230 [2024-12-08 14:17:19.050002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:16.230 [2024-12-08 14:17:19.050007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:16.230 [2024-12-08 14:17:19.050017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:16.230 [2024-12-08 14:17:19.050022] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.230 [2024-12-08 14:17:19.050030] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.230 [2024-12-08 14:17:19.050036] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.230 [2024-12-08 14:17:19.050043] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:16.230 [2024-12-08 14:17:19.050048] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.231 [2024-12-08 14:17:19.050055] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:16.231 [2024-12-08 14:17:19.050060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.050067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.231 [2024-12-08 14:17:19.050073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:22:16.231 [2024-12-08 14:17:19.050079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.061890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.061921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.231 [2024-12-08 14:17:19.061928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.781 ms 00:22:16.231 [2024-12-08 14:17:19.061936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.062024] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.062035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.231 [2024-12-08 14:17:19.062043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:16.231 [2024-12-08 14:17:19.062050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.085671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.085699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.231 [2024-12-08 14:17:19.085707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.588 ms 00:22:16.231 [2024-12-08 14:17:19.085715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.085737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.085745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.231 [2024-12-08 14:17:19.085751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:16.231 [2024-12-08 14:17:19.085760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.086081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.086095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.231 [2024-12-08 14:17:19.086101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:22:16.231 [2024-12-08 14:17:19.086108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.086193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.086237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.231 [2024-12-08 14:17:19.086243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:16.231 [2024-12-08 14:17:19.086251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.098111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.098137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.231 [2024-12-08 14:17:19.098144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.847 ms 00:22:16.231 [2024-12-08 14:17:19.098152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.231 [2024-12-08 14:17:19.107003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:16.231 [2024-12-08 14:17:19.109228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.231 [2024-12-08 14:17:19.109251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.231 [2024-12-08 14:17:19.109260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.023 ms 00:22:16.231 [2024-12-08 14:17:19.109266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.491 [2024-12-08 14:17:19.187861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.491 [2024-12-08 14:17:19.187906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:16.491 [2024-12-08 14:17:19.187921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 78.571 ms 00:22:16.491 [2024-12-08 14:17:19.187929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.491 [2024-12-08 14:17:19.187973] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:22:16.491 [2024-12-08 14:17:19.187993] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:19.788 [2024-12-08 14:17:22.690713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.788 [2024-12-08 14:17:22.690803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:19.788 [2024-12-08 14:17:22.690826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3502.712 ms 00:22:19.788 [2024-12-08 14:17:22.690836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:19.788 [2024-12-08 14:17:22.691081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:19.788 [2024-12-08 14:17:22.691095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:19.788 [2024-12-08 14:17:22.691112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:22:19.788 [2024-12-08 14:17:22.691120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.717537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.717593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:20.050 [2024-12-08 14:17:22.717611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.356 ms 00:22:20.050 [2024-12-08 14:17:22.717620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.742986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.743044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:20.050 [2024-12-08 14:17:22.743072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.300 ms 00:22:20.050 [2024-12-08 14:17:22.743083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.743458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.743485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:20.050 [2024-12-08 14:17:22.743497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:22:20.050 [2024-12-08 14:17:22.743506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.816009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.816059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:20.050 [2024-12-08 14:17:22.816076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.441 ms 00:22:20.050 [2024-12-08 14:17:22.816084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.843570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.843620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:20.050 [2024-12-08 14:17:22.843635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.429 ms 00:22:20.050 
[2024-12-08 14:17:22.843643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.845207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.845256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:20.050 [2024-12-08 14:17:22.845271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:22:20.050 [2024-12-08 14:17:22.845279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.872776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.872829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:20.050 [2024-12-08 14:17:22.872846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.441 ms 00:22:20.050 [2024-12-08 14:17:22.872853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.050 [2024-12-08 14:17:22.872918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.050 [2024-12-08 14:17:22.872929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:20.050 [2024-12-08 14:17:22.872941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:20.051 [2024-12-08 14:17:22.872948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.051 [2024-12-08 14:17:22.873107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.051 [2024-12-08 14:17:22.873122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:20.051 [2024-12-08 14:17:22.873133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:20.051 [2024-12-08 14:17:22.873141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.051 [2024-12-08 14:17:22.874317] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3835.668 ms, result 0 00:22:20.051 { 00:22:20.051 "name": "ftl0", 00:22:20.051 "uuid": "4613a08b-5f26-4463-9b36-017e8defea61" 00:22:20.051 } 00:22:20.051 14:17:22 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:20.051 14:17:22 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:20.312 14:17:23 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:20.312 14:17:23 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:20.312 14:17:23 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:20.574 /dev/nbd0 00:22:20.574 14:17:23 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:20.574 14:17:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:20.574 14:17:23 -- common/autotest_common.sh@867 -- # local i 00:22:20.574 14:17:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:20.574 14:17:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:20.574 14:17:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:20.574 14:17:23 -- common/autotest_common.sh@871 -- # break 00:22:20.574 14:17:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:20.574 14:17:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:20.574 14:17:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:20.574 1+0 records in 00:22:20.574 
1+0 records out 00:22:20.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842754 s, 4.9 MB/s 00:22:20.574 14:17:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:20.574 14:17:23 -- common/autotest_common.sh@884 -- # size=4096 00:22:20.574 14:17:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:20.574 14:17:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:20.574 14:17:23 -- common/autotest_common.sh@887 -- # return 0 00:22:20.574 14:17:23 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:20.574 [2024-12-08 14:17:23.384882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:20.574 [2024-12-08 14:17:23.385011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75878 ] 00:22:20.833 [2024-12-08 14:17:23.537448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.092 [2024-12-08 14:17:23.804797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.467  [2024-12-08T14:17:26.319Z] Copying: 246/1024 [MB] (246 MBps) [2024-12-08T14:17:27.304Z] Copying: 440/1024 [MB] (194 MBps) [2024-12-08T14:17:28.283Z] Copying: 679/1024 [MB] (238 MBps) [2024-12-08T14:17:28.540Z] Copying: 928/1024 [MB] (248 MBps) [2024-12-08T14:17:29.105Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:22:26.185 00:22:26.185 14:17:29 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:28.092 14:17:30 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:28.092 [2024-12-08 14:17:30.740649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
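After waitfornbd's 4 KiB probe read confirms /dev/nbd0 is live, the test dirties the device in two spdk_dd stages of 262144 x 4 KiB = 1 GiB each, with the file's md5 recorded in between for the post-shutdown comparison. Stage 1 writes urandom into a plain file at an average of 233 MBps; stage 2, launching here, replays that file into the FTL device through nbd with --oflag=direct and runs more than an order of magnitude slower, as the Copying lines that follow show. The two invocations as traced in this run:

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    tf=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # stage 1: seed 1 GiB of random data (262144 x 4 KiB) into a plain file
    $dd_bin -m 0x2 -r /var/tmp/spdk_dd.sock \
        --if=/dev/urandom --of=$tf --bs=4096 --count=262144
    md5sum $tf    # checksum to verify against after the dirty shutdown
    # stage 2: replay the file into the FTL bdev via nbd, bypassing the page cache
    $dd_bin -m 0x2 -r /var/tmp/spdk_dd.sock \
        --if=$tf --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct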
00:22:28.092 [2024-12-08 14:17:30.740744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75954 ] 00:22:28.092 [2024-12-08 14:17:30.883602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.350 [2024-12-08 14:17:31.046519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.736  [2024-12-08T14:17:33.613Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-08T14:17:34.553Z] Copying: 37/1024 [MB] (17 MBps) [... 47 intermediate progress records elided; per-interval throughput stayed between 13 and 36 MBps ...] [2024-12-08T14:18:22.478Z] Copying: 991/1024 [MB] (19 MBps) [2024-12-08T14:18:23.050Z] Copying: 1010/1024
[MB] (19 MBps) [2024-12-08T14:18:23.623Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:23:20.703 00:23:20.703 14:18:23 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:20.703 14:18:23 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:20.963 14:18:23 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:21.224 [2024-12-08 14:18:23.949878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:23.949920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.224 [2024-12-08 14:18:23.949933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:21.224 [2024-12-08 14:18:23.949940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:23.949957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.224 [2024-12-08 14:18:23.951771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:23.951796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.224 [2024-12-08 14:18:23.951805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:23:21.224 [2024-12-08 14:18:23.951811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:23.953583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:23.953610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.224 [2024-12-08 14:18:23.953623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:23:21.224 [2024-12-08 14:18:23.953629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:23.965459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:23.965485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.224 [2024-12-08 14:18:23.965495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.813 ms 00:23:21.224 [2024-12-08 14:18:23.965501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:23.970284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:23.970310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:21.224 [2024-12-08 14:18:23.970319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.755 ms 00:23:21.224 [2024-12-08 14:18:23.970327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:23.988724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:23.988752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.224 [2024-12-08 14:18:23.988762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.339 ms 00:23:21.224 [2024-12-08 14:18:23.988768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.001283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:24.001309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.224 [2024-12-08 14:18:24.001321] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.484 ms 00:23:21.224 [2024-12-08 14:18:24.001328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.001430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:24.001437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.224 [2024-12-08 14:18:24.001445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:21.224 [2024-12-08 14:18:24.001451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.019714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:24.019738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:21.224 [2024-12-08 14:18:24.019747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.245 ms 00:23:21.224 [2024-12-08 14:18:24.019753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.037462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:24.037488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:21.224 [2024-12-08 14:18:24.037497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.680 ms 00:23:21.224 [2024-12-08 14:18:24.037503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.054706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:24.054731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:21.224 [2024-12-08 14:18:24.054740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.174 ms 00:23:21.224 [2024-12-08 14:18:24.054745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.071732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.224 [2024-12-08 14:18:24.071756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:21.224 [2024-12-08 14:18:24.071765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.931 ms 00:23:21.224 [2024-12-08 14:18:24.071770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.224 [2024-12-08 14:18:24.071799] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:21.224 [2024-12-08 14:18:24.071810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [2024-12-08 14:18:24.071819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [2024-12-08 14:18:24.071825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [2024-12-08 14:18:24.071832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [2024-12-08 14:18:24.071837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [2024-12-08 14:18:24.071844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [2024-12-08 14:18:24.071850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 
[2024-12-08 14:18:24.071857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:21.224 [... Band 9 through Band 100 elided: every entry is identical, "0 / 261120 wr_cnt: 0 state: free" ...] 00:23:21.225 [2024-12-08 14:18:24.072476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:21.225 [2024-12-08 14:18:24.072483] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4613a08b-5f26-4463-9b36-017e8defea61 00:23:21.225 [2024-12-08 14:18:24.072490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:21.225 [2024-12-08 14:18:24.072497] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:21.226 [2024-12-08 14:18:24.072502] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:21.226 [2024-12-08 14:18:24.072509] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:21.226 [2024-12-08 14:18:24.072514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:21.226 [2024-12-08 14:18:24.072521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] crit: 0 00:23:21.226 [2024-12-08 14:18:24.072526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:21.226 [2024-12-08 14:18:24.072532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:21.226 [2024-12-08 14:18:24.072537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:21.226 [2024-12-08 14:18:24.072544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.226 [2024-12-08 14:18:24.072550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:21.226 [2024-12-08 14:18:24.072557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:23:21.226 [2024-12-08 14:18:24.072562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.226 [2024-12-08 14:18:24.081979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.226 [2024-12-08 14:18:24.082009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:21.226 [2024-12-08 14:18:24.082017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.392 ms 00:23:21.226 [2024-12-08 14:18:24.082022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.226 [2024-12-08 14:18:24.082169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.226 [2024-12-08 14:18:24.082182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:21.226 [2024-12-08 14:18:24.082189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:21.226 [2024-12-08 14:18:24.082194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.226 [2024-12-08 14:18:24.116632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.226 [2024-12-08 14:18:24.116660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.226 [2024-12-08 14:18:24.116669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.226 [2024-12-08 14:18:24.116675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.226 [2024-12-08 14:18:24.116723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.226 [2024-12-08 14:18:24.116729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.226 [2024-12-08 14:18:24.116736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.226 [2024-12-08 14:18:24.116742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.226 [2024-12-08 14:18:24.116797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.226 [2024-12-08 14:18:24.116805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.226 [2024-12-08 14:18:24.116812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.226 [2024-12-08 14:18:24.116817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.226 [2024-12-08 14:18:24.116832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.226 [2024-12-08 14:18:24.116839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.226 [2024-12-08 14:18:24.116846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.226 [2024-12-08 14:18:24.116852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.174847] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.174880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.485 [2024-12-08 14:18:24.174890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.174896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.485 [2024-12-08 14:18:24.197553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.485 [2024-12-08 14:18:24.197627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.485 [2024-12-08 14:18:24.197681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.485 [2024-12-08 14:18:24.197773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:21.485 [2024-12-08 14:18:24.197817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.485 [2024-12-08 14:18:24.197866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.197908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.485 [2024-12-08 14:18:24.197914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.485 [2024-12-08 14:18:24.197921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.485 [2024-12-08 14:18:24.197927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:21.485 [2024-12-08 14:18:24.198045] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 248.118 ms, result 0 00:23:21.485 true 00:23:21.485 14:18:24 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75723 00:23:21.485 14:18:24 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75723 00:23:21.485 14:18:24 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:21.485 [2024-12-08 14:18:24.271430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:21.485 [2024-12-08 14:18:24.271514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76522 ] 00:23:21.744 [2024-12-08 14:18:24.412224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.744 [2024-12-08 14:18:24.549564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.129  [2024-12-08T14:18:26.986Z] Copying: 209/1024 [MB] (209 MBps) [2024-12-08T14:18:27.922Z] Copying: 412/1024 [MB] (202 MBps) [2024-12-08T14:18:28.858Z] Copying: 672/1024 [MB] (260 MBps) [2024-12-08T14:18:29.117Z] Copying: 931/1024 [MB] (258 MBps) [2024-12-08T14:18:30.053Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:23:27.133 00:23:27.133 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75723 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:27.133 14:18:29 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:27.133 [2024-12-08 14:18:29.776118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
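The sequence above is the pivot of the scenario: steps @83/@84 kill the spdk_tgt outright (the shell's "75723 Killed" message confirms it) and remove its shm trace file, @87 builds a second 1 GiB random file, and the @88 run now starting re-creates the whole bdev stack inside spdk_dd from the ftl.json captured earlier with save_subsystem_config, writing at --seek=262144 so this half of the volume lands in a separate run. A hedged sketch of that kill-and-resume sequence, with $svcpid standing in for the target's PID and short names standing in for the literal paths above:

    # Simulate abrupt loss of the target: SIGKILL gives FTL no chance to
    # run any shutdown/persist step from this point on.
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
    # Rebuild the bdev stack from the saved JSON inside spdk_dd itself and
    # write the second slice of the test data straight to the FTL bdev.
    "$SPDK_BIN_DIR/spdk_dd" --if=testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 --json=ftl.json

The startup that follows therefore has to reconcile on-disk state by itself; note the blobstore recovery and "SHM: clean 0, shm_clean 0" lines below.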
00:23:27.133 [2024-12-08 14:18:29.776664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76579 ] 00:23:27.133 [2024-12-08 14:18:29.923607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.402 [2024-12-08 14:18:30.064220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.402 [2024-12-08 14:18:30.269388] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.402 [2024-12-08 14:18:30.269439] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.733 [2024-12-08 14:18:30.328952] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:27.733 [2024-12-08 14:18:30.329342] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:27.733 [2024-12-08 14:18:30.329608] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:27.733 [2024-12-08 14:18:30.563528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.563563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:27.733 [2024-12-08 14:18:30.563573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:27.733 [2024-12-08 14:18:30.563578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.563616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.563624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:27.733 [2024-12-08 14:18:30.563631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:27.733 [2024-12-08 14:18:30.563637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.563649] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:27.733 [2024-12-08 14:18:30.564210] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:27.733 [2024-12-08 14:18:30.564230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.564236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:27.733 [2024-12-08 14:18:30.564242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:23:27.733 [2024-12-08 14:18:30.564247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.565202] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:27.733 [2024-12-08 14:18:30.574838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.574868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:27.733 [2024-12-08 14:18:30.574876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.638 ms 00:23:27.733 [2024-12-08 14:18:30.574882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.574923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.574932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:27.733 
[2024-12-08 14:18:30.574938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:27.733 [2024-12-08 14:18:30.574946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.579328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.579352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:27.733 [2024-12-08 14:18:30.579359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.337 ms 00:23:27.733 [2024-12-08 14:18:30.579365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.579427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.579433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:27.733 [2024-12-08 14:18:30.579439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:27.733 [2024-12-08 14:18:30.579445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.579479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.579487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:27.733 [2024-12-08 14:18:30.579493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:27.733 [2024-12-08 14:18:30.579498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.579515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:27.733 [2024-12-08 14:18:30.582260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.582285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:27.733 [2024-12-08 14:18:30.582292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:23:27.733 [2024-12-08 14:18:30.582297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.582327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.582333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:27.733 [2024-12-08 14:18:30.582339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:27.733 [2024-12-08 14:18:30.582345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.582358] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:27.733 [2024-12-08 14:18:30.582372] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:27.733 [2024-12-08 14:18:30.582396] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:27.733 [2024-12-08 14:18:30.582408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:27.733 [2024-12-08 14:18:30.582463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:27.733 [2024-12-08 14:18:30.582471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:27.733 [2024-12-08 14:18:30.582478] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:27.733 [2024-12-08 14:18:30.582485] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:27.733 [2024-12-08 14:18:30.582492] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:27.733 [2024-12-08 14:18:30.582498] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:27.733 [2024-12-08 14:18:30.582503] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:27.733 [2024-12-08 14:18:30.582508] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:27.733 [2024-12-08 14:18:30.582513] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:27.733 [2024-12-08 14:18:30.582521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.582526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:27.733 [2024-12-08 14:18:30.582532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:23:27.733 [2024-12-08 14:18:30.582537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.582582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.733 [2024-12-08 14:18:30.582588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:27.733 [2024-12-08 14:18:30.582593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:27.733 [2024-12-08 14:18:30.582598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.733 [2024-12-08 14:18:30.582650] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:27.733 [2024-12-08 14:18:30.582657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:27.733 [2024-12-08 14:18:30.582665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:27.733 [2024-12-08 14:18:30.582670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.733 [2024-12-08 14:18:30.582676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:27.733 [2024-12-08 14:18:30.582680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:27.733 [2024-12-08 14:18:30.582686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:27.733 [2024-12-08 14:18:30.582692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:27.733 [2024-12-08 14:18:30.582697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:27.733 [2024-12-08 14:18:30.582702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:27.733 [2024-12-08 14:18:30.582707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:27.733 [2024-12-08 14:18:30.582711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:27.733 [2024-12-08 14:18:30.582721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:27.733 [2024-12-08 14:18:30.582726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:27.733 [2024-12-08 14:18:30.582731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:27.733 [2024-12-08 14:18:30.582735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:23:27.733 [2024-12-08 14:18:30.582740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:27.733 [2024-12-08 14:18:30.582745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:27.733 [2024-12-08 14:18:30.582750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:27.734 [2024-12-08 14:18:30.582760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:27.734 [2024-12-08 14:18:30.582765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:27.734 [2024-12-08 14:18:30.582775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582784] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:27.734 [2024-12-08 14:18:30.582789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:27.734 [2024-12-08 14:18:30.582803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:27.734 [2024-12-08 14:18:30.582817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:27.734 [2024-12-08 14:18:30.582832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:27.734 [2024-12-08 14:18:30.582841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:27.734 [2024-12-08 14:18:30.582846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:27.734 [2024-12-08 14:18:30.582851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:27.734 [2024-12-08 14:18:30.582856] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:27.734 [2024-12-08 14:18:30.582861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:27.734 [2024-12-08 14:18:30.582866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.734 [2024-12-08 14:18:30.582877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:27.734 [2024-12-08 14:18:30.582882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:27.734 [2024-12-08 14:18:30.582887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:27.734 [2024-12-08 14:18:30.582892] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:23:27.734 [2024-12-08 14:18:30.582897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:27.734 [2024-12-08 14:18:30.582902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:27.734 [2024-12-08 14:18:30.582907] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:27.734 [2024-12-08 14:18:30.582914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:27.734 [2024-12-08 14:18:30.582923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:27.734 [2024-12-08 14:18:30.582928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:27.734 [2024-12-08 14:18:30.582933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:27.734 [2024-12-08 14:18:30.582938] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:27.734 [2024-12-08 14:18:30.582944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:27.734 [2024-12-08 14:18:30.582949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:27.734 [2024-12-08 14:18:30.582954] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:27.734 [2024-12-08 14:18:30.582959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:27.734 [2024-12-08 14:18:30.582964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:27.734 [2024-12-08 14:18:30.582970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:27.734 [2024-12-08 14:18:30.582975] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:27.734 [2024-12-08 14:18:30.582989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:27.734 [2024-12-08 14:18:30.582995] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:27.734 [2024-12-08 14:18:30.583000] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:27.734 [2024-12-08 14:18:30.583006] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:27.734 [2024-12-08 14:18:30.583014] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:27.734 [2024-12-08 14:18:30.583020] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:27.734 
[2024-12-08 14:18:30.583026] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:27.734 [2024-12-08 14:18:30.583032] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:27.734 [2024-12-08 14:18:30.583038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.583043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:27.734 [2024-12-08 14:18:30.583049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:23:27.734 [2024-12-08 14:18:30.583054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.594827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.594856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:27.734 [2024-12-08 14:18:30.594864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.746 ms 00:23:27.734 [2024-12-08 14:18:30.594870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.594936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.594942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:27.734 [2024-12-08 14:18:30.594948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:27.734 [2024-12-08 14:18:30.594954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.637828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.637863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:27.734 [2024-12-08 14:18:30.637874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.830 ms 00:23:27.734 [2024-12-08 14:18:30.637882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.637922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.637930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:27.734 [2024-12-08 14:18:30.637937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:27.734 [2024-12-08 14:18:30.637946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.638276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.638297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:27.734 [2024-12-08 14:18:30.638304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:27.734 [2024-12-08 14:18:30.638310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.638400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.638413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:27.734 [2024-12-08 14:18:30.638419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:27.734 [2024-12-08 14:18:30.638425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.734 [2024-12-08 14:18:30.649366] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.734 [2024-12-08 14:18:30.649392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:27.734 [2024-12-08 14:18:30.649399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.926 ms 00:23:27.734 [2024-12-08 14:18:30.649405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.007 [2024-12-08 14:18:30.659245] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:28.007 [2024-12-08 14:18:30.659276] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:28.007 [2024-12-08 14:18:30.659284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.007 [2024-12-08 14:18:30.659290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:28.007 [2024-12-08 14:18:30.659296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.805 ms 00:23:28.007 [2024-12-08 14:18:30.659302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.007 [2024-12-08 14:18:30.677820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.007 [2024-12-08 14:18:30.677848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:28.007 [2024-12-08 14:18:30.677860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.487 ms 00:23:28.007 [2024-12-08 14:18:30.677867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.007 [2024-12-08 14:18:30.686941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.007 [2024-12-08 14:18:30.686967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:28.007 [2024-12-08 14:18:30.686974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.044 ms 00:23:28.007 [2024-12-08 14:18:30.686994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.007 [2024-12-08 14:18:30.695827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.007 [2024-12-08 14:18:30.695853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:28.007 [2024-12-08 14:18:30.695860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:23:28.007 [2024-12-08 14:18:30.695866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.007 [2024-12-08 14:18:30.696142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.007 [2024-12-08 14:18:30.696157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:28.007 [2024-12-08 14:18:30.696164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:23:28.007 [2024-12-08 14:18:30.696170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.007 [2024-12-08 14:18:30.741607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.007 [2024-12-08 14:18:30.741652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:28.007 [2024-12-08 14:18:30.741663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.421 ms 00:23:28.007 [2024-12-08 14:18:30.741670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.749788] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum 
resident size is: 9 (of 10) MiB 00:23:28.008 [2024-12-08 14:18:30.751758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.751784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:28.008 [2024-12-08 14:18:30.751792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.034 ms 00:23:28.008 [2024-12-08 14:18:30.751798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.751858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.751866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:28.008 [2024-12-08 14:18:30.751874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:28.008 [2024-12-08 14:18:30.751880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.751933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.751951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:28.008 [2024-12-08 14:18:30.751958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:28.008 [2024-12-08 14:18:30.751965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.752910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.752938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:28.008 [2024-12-08 14:18:30.752945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:23:28.008 [2024-12-08 14:18:30.752954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.752978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.752998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:28.008 [2024-12-08 14:18:30.753007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.008 [2024-12-08 14:18:30.753013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.753039] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:28.008 [2024-12-08 14:18:30.753046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.753052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:28.008 [2024-12-08 14:18:30.753057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:28.008 [2024-12-08 14:18:30.753063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.771303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.771336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:28.008 [2024-12-08 14:18:30.771344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.219 ms 00:23:28.008 [2024-12-08 14:18:30.771350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.771403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.008 [2024-12-08 14:18:30.771411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:23:28.008 [2024-12-08 14:18:30.771417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:28.008 [2024-12-08 14:18:30.771423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.008 [2024-12-08 14:18:30.772157] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.285 ms, result 0 00:23:28.942  [2024-12-08T14:18:32.803Z] Copying: 44/1024 [MB] (44 MBps) [2024-12-08T14:18:34.193Z] Copying: 76/1024 [MB] (32 MBps) [2024-12-08T14:18:35.139Z] Copying: 87/1024 [MB] (10 MBps) [2024-12-08T14:18:36.082Z] Copying: 99/1024 [MB] (12 MBps) [2024-12-08T14:18:37.024Z] Copying: 114/1024 [MB] (14 MBps) [2024-12-08T14:18:37.964Z] Copying: 127/1024 [MB] (13 MBps) [2024-12-08T14:18:38.906Z] Copying: 138/1024 [MB] (10 MBps) [2024-12-08T14:18:39.848Z] Copying: 148/1024 [MB] (10 MBps) [2024-12-08T14:18:40.794Z] Copying: 162/1024 [MB] (14 MBps) [2024-12-08T14:18:42.176Z] Copying: 176/1024 [MB] (13 MBps) [2024-12-08T14:18:43.112Z] Copying: 195/1024 [MB] (19 MBps) [2024-12-08T14:18:44.055Z] Copying: 242/1024 [MB] (46 MBps) [2024-12-08T14:18:44.997Z] Copying: 260/1024 [MB] (17 MBps) [2024-12-08T14:18:45.954Z] Copying: 273/1024 [MB] (13 MBps) [2024-12-08T14:18:46.898Z] Copying: 293/1024 [MB] (20 MBps) [2024-12-08T14:18:47.843Z] Copying: 306/1024 [MB] (12 MBps) [2024-12-08T14:18:48.787Z] Copying: 316/1024 [MB] (10 MBps) [2024-12-08T14:18:50.174Z] Copying: 334/1024 [MB] (18 MBps) [2024-12-08T14:18:51.116Z] Copying: 348/1024 [MB] (13 MBps) [2024-12-08T14:18:52.061Z] Copying: 358/1024 [MB] (10 MBps) [2024-12-08T14:18:53.008Z] Copying: 368/1024 [MB] (10 MBps) [2024-12-08T14:18:53.955Z] Copying: 379/1024 [MB] (10 MBps) [2024-12-08T14:18:54.901Z] Copying: 389/1024 [MB] (10 MBps) [2024-12-08T14:18:55.842Z] Copying: 399/1024 [MB] (10 MBps) [2024-12-08T14:18:57.227Z] Copying: 410/1024 [MB] (10 MBps) [2024-12-08T14:18:57.815Z] Copying: 420/1024 [MB] (10 MBps) [2024-12-08T14:18:59.199Z] Copying: 430/1024 [MB] (10 MBps) [2024-12-08T14:19:00.132Z] Copying: 440/1024 [MB] (10 MBps) [2024-12-08T14:19:01.078Z] Copying: 468/1024 [MB] (28 MBps) [2024-12-08T14:19:02.021Z] Copying: 490/1024 [MB] (21 MBps) [2024-12-08T14:19:03.009Z] Copying: 505/1024 [MB] (15 MBps) [2024-12-08T14:19:03.952Z] Copying: 519/1024 [MB] (13 MBps) [2024-12-08T14:19:04.898Z] Copying: 533/1024 [MB] (13 MBps) [2024-12-08T14:19:05.840Z] Copying: 546/1024 [MB] (13 MBps) [2024-12-08T14:19:07.227Z] Copying: 557/1024 [MB] (10 MBps) [2024-12-08T14:19:07.797Z] Copying: 570/1024 [MB] (13 MBps) [2024-12-08T14:19:09.183Z] Copying: 585/1024 [MB] (15 MBps) [2024-12-08T14:19:10.128Z] Copying: 597/1024 [MB] (11 MBps) [2024-12-08T14:19:11.071Z] Copying: 612/1024 [MB] (15 MBps) [2024-12-08T14:19:12.014Z] Copying: 637/1024 [MB] (24 MBps) [2024-12-08T14:19:12.961Z] Copying: 651/1024 [MB] (13 MBps) [2024-12-08T14:19:13.907Z] Copying: 663/1024 [MB] (12 MBps) [2024-12-08T14:19:14.849Z] Copying: 680/1024 [MB] (16 MBps) [2024-12-08T14:19:15.787Z] Copying: 699/1024 [MB] (19 MBps) [2024-12-08T14:19:17.165Z] Copying: 718/1024 [MB] (18 MBps) [2024-12-08T14:19:18.109Z] Copying: 735/1024 [MB] (16 MBps) [2024-12-08T14:19:19.051Z] Copying: 754/1024 [MB] (19 MBps) [2024-12-08T14:19:19.990Z] Copying: 771/1024 [MB] (16 MBps) [2024-12-08T14:19:20.933Z] Copying: 790/1024 [MB] (18 MBps) [2024-12-08T14:19:21.874Z] Copying: 806/1024 [MB] (16 MBps) [2024-12-08T14:19:22.818Z] Copying: 817/1024 [MB] (11 MBps) [2024-12-08T14:19:24.207Z] Copying: 831/1024 [MB] (14 MBps) 
[2024-12-08T14:19:25.154Z] Copying: 846/1024 [MB] (14 MBps) [2024-12-08T14:19:26.099Z] Copying: 860/1024 [MB] (13 MBps) [2024-12-08T14:19:27.043Z] Copying: 871/1024 [MB] (11 MBps) [2024-12-08T14:19:27.987Z] Copying: 885/1024 [MB] (13 MBps) [2024-12-08T14:19:28.933Z] Copying: 895/1024 [MB] (10 MBps) [2024-12-08T14:19:29.880Z] Copying: 905/1024 [MB] (10 MBps) [2024-12-08T14:19:30.823Z] Copying: 915/1024 [MB] (10 MBps) [2024-12-08T14:19:31.795Z] Copying: 925/1024 [MB] (10 MBps) [2024-12-08T14:19:33.176Z] Copying: 936/1024 [MB] (10 MBps) [2024-12-08T14:19:34.165Z] Copying: 968808/1048576 [kB] (10216 kBps) [2024-12-08T14:19:35.151Z] Copying: 979040/1048576 [kB] (10232 kBps) [2024-12-08T14:19:36.103Z] Copying: 966/1024 [MB] (10 MBps) [2024-12-08T14:19:37.050Z] Copying: 976/1024 [MB] (10 MBps) [2024-12-08T14:19:37.999Z] Copying: 986/1024 [MB] (10 MBps) [2024-12-08T14:19:38.943Z] Copying: 996/1024 [MB] (10 MBps) [2024-12-08T14:19:39.889Z] Copying: 1007/1024 [MB] (10 MBps) [2024-12-08T14:19:40.834Z] Copying: 1017/1024 [MB] (10 MBps) [2024-12-08T14:19:41.408Z] Copying: 1048072/1048576 [kB] (6396 kBps) [2024-12-08T14:19:41.408Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-08 14:19:41.216758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.216842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:38.488 [2024-12-08 14:19:41.216859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:38.488 [2024-12-08 14:19:41.216868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.488 [2024-12-08 14:19:41.217706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:38.488 [2024-12-08 14:19:41.222531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.222576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:38.488 [2024-12-08 14:19:41.222596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.773 ms 00:24:38.488 [2024-12-08 14:19:41.222605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.488 [2024-12-08 14:19:41.236300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.236347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:38.488 [2024-12-08 14:19:41.236361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.385 ms 00:24:38.488 [2024-12-08 14:19:41.236369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.488 [2024-12-08 14:19:41.260235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.260281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:38.488 [2024-12-08 14:19:41.260293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.849 ms 00:24:38.488 [2024-12-08 14:19:41.260301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.488 [2024-12-08 14:19:41.266457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.266495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:38.488 [2024-12-08 14:19:41.266508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:24:38.488 [2024-12-08 14:19:41.266515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:38.488 [2024-12-08 14:19:41.293603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.293807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:38.488 [2024-12-08 14:19:41.293829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.035 ms 00:24:38.488 [2024-12-08 14:19:41.293837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.488 [2024-12-08 14:19:41.310170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.488 [2024-12-08 14:19:41.310216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:38.488 [2024-12-08 14:19:41.310229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.223 ms 00:24:38.488 [2024-12-08 14:19:41.310237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.751 [2024-12-08 14:19:41.568940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.751 [2024-12-08 14:19:41.569032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:38.751 [2024-12-08 14:19:41.569048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 258.651 ms 00:24:38.751 [2024-12-08 14:19:41.569068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.751 [2024-12-08 14:19:41.595317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.751 [2024-12-08 14:19:41.595365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:38.751 [2024-12-08 14:19:41.595378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.226 ms 00:24:38.751 [2024-12-08 14:19:41.595386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.751 [2024-12-08 14:19:41.621047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.751 [2024-12-08 14:19:41.621101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:38.751 [2024-12-08 14:19:41.621112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.616 ms 00:24:38.751 [2024-12-08 14:19:41.621120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.751 [2024-12-08 14:19:41.646126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.751 [2024-12-08 14:19:41.646168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:38.751 [2024-12-08 14:19:41.646180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.963 ms 00:24:38.751 [2024-12-08 14:19:41.646187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.014 [2024-12-08 14:19:41.671176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.014 [2024-12-08 14:19:41.671368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:39.014 [2024-12-08 14:19:41.671389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.904 ms 00:24:39.014 [2024-12-08 14:19:41.671396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.014 [2024-12-08 14:19:41.671480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:39.014 [2024-12-08 14:19:41.671497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92160 / 261120 wr_cnt: 1 state: open 00:24:39.014 [2024-12-08 14:19:41.671508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 
0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671921] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.671976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.672007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.672015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.672023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.672033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:39.014 [2024-12-08 14:19:41.672041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 
14:19:41.672157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:39.015 [2024-12-08 14:19:41.672362] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:39.015 [2024-12-08 14:19:41.672374] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4613a08b-5f26-4463-9b36-017e8defea61 00:24:39.015 [2024-12-08 14:19:41.672382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92160 00:24:39.015 [2024-12-08 14:19:41.672390] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93120 00:24:39.015 [2024-12-08 14:19:41.672398] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92160 00:24:39.015 [2024-12-08 14:19:41.672413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:24:39.015 [2024-12-08 14:19:41.672420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:39.015 [2024-12-08 14:19:41.672429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:39.015 [2024-12-08 14:19:41.672437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:39.015 [2024-12-08 14:19:41.672443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:39.015 [2024-12-08 14:19:41.672450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:39.015 [2024-12-08 14:19:41.672458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.015 [2024-12-08 14:19:41.672467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:39.015 [2024-12-08 14:19:41.672475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:24:39.015 [2024-12-08 14:19:41.672482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.686038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.015 [2024-12-08 14:19:41.686194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:39.015 [2024-12-08 14:19:41.686211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.518 ms 00:24:39.015 [2024-12-08 14:19:41.686219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.686451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.015 [2024-12-08 14:19:41.686461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:39.015 [2024-12-08 14:19:41.686477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:24:39.015 [2024-12-08 14:19:41.686485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.725476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.725639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.015 [2024-12-08 14:19:41.725658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.725668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.725732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.725741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.015 [2024-12-08 14:19:41.725755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.725763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.725838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.725848] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.015 [2024-12-08 14:19:41.725857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.725865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.725881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.725890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.015 [2024-12-08 14:19:41.725898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.725909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.805523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.805578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.015 [2024-12-08 14:19:41.805591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.805600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.837724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.837769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.015 [2024-12-08 14:19:41.837787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.837796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.837861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.837879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.015 [2024-12-08 14:19:41.837888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.837896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.837939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.837950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.015 [2024-12-08 14:19:41.837958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.837966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.838097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.838109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.015 [2024-12-08 14:19:41.838118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.838126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.838156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.838166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:39.015 [2024-12-08 14:19:41.838174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.015 [2024-12-08 14:19:41.838183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.015 [2024-12-08 14:19:41.838227] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:39.015 [2024-12-08 14:19:41.838236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.015 [2024-12-08 14:19:41.838245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.016 [2024-12-08 14:19:41.838254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.016 [2024-12-08 14:19:41.838301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.016 [2024-12-08 14:19:41.838310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.016 [2024-12-08 14:19:41.838318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.016 [2024-12-08 14:19:41.838326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.016 [2024-12-08 14:19:41.838460] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 622.281 ms, result 0 00:24:40.403 00:24:40.403 00:24:40.403 14:19:43 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:42.958 14:19:45 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:42.958 [2024-12-08 14:19:45.424407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:42.958 [2024-12-08 14:19:45.424714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77355 ] 00:24:42.958 [2024-12-08 14:19:45.576673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.958 [2024-12-08 14:19:45.795592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.218 [2024-12-08 14:19:46.083875] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.218 [2024-12-08 14:19:46.083962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.479 [2024-12-08 14:19:46.238319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.479 [2024-12-08 14:19:46.238544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.479 [2024-12-08 14:19:46.238569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:43.479 [2024-12-08 14:19:46.238584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.479 [2024-12-08 14:19:46.238652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.479 [2024-12-08 14:19:46.238664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.479 [2024-12-08 14:19:46.238673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:43.479 [2024-12-08 14:19:46.238681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.479 [2024-12-08 14:19:46.238703] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.479 [2024-12-08 14:19:46.239486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.479 [2024-12-08 14:19:46.239506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:43.479 [2024-12-08 14:19:46.239516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.480 [2024-12-08 14:19:46.239526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:24:43.480 [2024-12-08 14:19:46.239534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.241267] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:43.480 [2024-12-08 14:19:46.255705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.255749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:43.480 [2024-12-08 14:19:46.255763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:24:43.480 [2024-12-08 14:19:46.255772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.255845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.255856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:43.480 [2024-12-08 14:19:46.255865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:43.480 [2024-12-08 14:19:46.255873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.264329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.264371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.480 [2024-12-08 14:19:46.264382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.379 ms 00:24:43.480 [2024-12-08 14:19:46.264391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.264486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.264496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.480 [2024-12-08 14:19:46.264505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:43.480 [2024-12-08 14:19:46.264513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.264558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.264568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.480 [2024-12-08 14:19:46.264577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:43.480 [2024-12-08 14:19:46.264584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.264616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.480 [2024-12-08 14:19:46.268859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.268897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.480 [2024-12-08 14:19:46.268908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.257 ms 00:24:43.480 [2024-12-08 14:19:46.268916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.268954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.268964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 
00:24:43.480 [2024-12-08 14:19:46.268973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:43.480 [2024-12-08 14:19:46.269002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.269054] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:43.480 [2024-12-08 14:19:46.269091] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:43.480 [2024-12-08 14:19:46.269127] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:43.480 [2024-12-08 14:19:46.269143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:43.480 [2024-12-08 14:19:46.269222] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:43.480 [2024-12-08 14:19:46.269233] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.480 [2024-12-08 14:19:46.269247] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:43.480 [2024-12-08 14:19:46.269258] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269268] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269276] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.480 [2024-12-08 14:19:46.269284] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.480 [2024-12-08 14:19:46.269291] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:43.480 [2024-12-08 14:19:46.269300] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:43.480 [2024-12-08 14:19:46.269308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.269316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.480 [2024-12-08 14:19:46.269324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:43.480 [2024-12-08 14:19:46.269331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.269399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.480 [2024-12-08 14:19:46.269409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.480 [2024-12-08 14:19:46.269416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:43.480 [2024-12-08 14:19:46.269431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.480 [2024-12-08 14:19:46.269501] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.480 [2024-12-08 14:19:46.269511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.480 [2024-12-08 14:19:46.269519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.480 [2024-12-08 14:19:46.269544] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.480 [2024-12-08 14:19:46.269565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.480 [2024-12-08 14:19:46.269578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.480 [2024-12-08 14:19:46.269587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.480 [2024-12-08 14:19:46.269594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.480 [2024-12-08 14:19:46.269601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.480 [2024-12-08 14:19:46.269608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:43.480 [2024-12-08 14:19:46.269614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.480 [2024-12-08 14:19:46.269635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:43.480 [2024-12-08 14:19:46.269642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:43.480 [2024-12-08 14:19:46.269656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:43.480 [2024-12-08 14:19:46.269664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.480 [2024-12-08 14:19:46.269678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269693] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.480 [2024-12-08 14:19:46.269701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.480 [2024-12-08 14:19:46.269721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.480 [2024-12-08 14:19:46.269740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.480 [2024-12-08 14:19:46.269760] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.480 [2024-12-08 
14:19:46.269772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.480 [2024-12-08 14:19:46.269778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:43.480 [2024-12-08 14:19:46.269784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.480 [2024-12-08 14:19:46.269790] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.480 [2024-12-08 14:19:46.269801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.480 [2024-12-08 14:19:46.269811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.480 [2024-12-08 14:19:46.269821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.480 [2024-12-08 14:19:46.269829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.480 [2024-12-08 14:19:46.269839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.480 [2024-12-08 14:19:46.269846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.480 [2024-12-08 14:19:46.269853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.480 [2024-12-08 14:19:46.269859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.480 [2024-12-08 14:19:46.269866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.481 [2024-12-08 14:19:46.269874] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.481 [2024-12-08 14:19:46.269884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.481 [2024-12-08 14:19:46.269892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.481 [2024-12-08 14:19:46.269899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:43.481 [2024-12-08 14:19:46.269906] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:43.481 [2024-12-08 14:19:46.269913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:43.481 [2024-12-08 14:19:46.269920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:43.481 [2024-12-08 14:19:46.269927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:43.481 [2024-12-08 14:19:46.269934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:43.481 [2024-12-08 14:19:46.269940] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:43.481 [2024-12-08 14:19:46.269948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:43.481 [2024-12-08 14:19:46.269955] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:43.481 [2024-12-08 14:19:46.269962] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:43.481 [2024-12-08 14:19:46.269969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:43.481 [2024-12-08 14:19:46.269977] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:43.481 [2024-12-08 14:19:46.269999] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.481 [2024-12-08 14:19:46.270007] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.481 [2024-12-08 14:19:46.270016] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.481 [2024-12-08 14:19:46.270024] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.481 [2024-12-08 14:19:46.270031] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.481 [2024-12-08 14:19:46.270041] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.481 [2024-12-08 14:19:46.270049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.270057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.481 [2024-12-08 14:19:46.270065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:24:43.481 [2024-12-08 14:19:46.270072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.288592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.288772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.481 [2024-12-08 14:19:46.288838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.473 ms 00:24:43.481 [2024-12-08 14:19:46.288873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.289370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.289412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:43.481 [2024-12-08 14:19:46.289426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:43.481 [2024-12-08 14:19:46.289436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.337072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.337126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.481 [2024-12-08 14:19:46.337139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.572 ms 00:24:43.481 [2024-12-08 14:19:46.337148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.337199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.337210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:24:43.481 [2024-12-08 14:19:46.337220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:43.481 [2024-12-08 14:19:46.337228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.337802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.337836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.481 [2024-12-08 14:19:46.337848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:24:43.481 [2024-12-08 14:19:46.337862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.338008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.338025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.481 [2024-12-08 14:19:46.338034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:24:43.481 [2024-12-08 14:19:46.338043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.354634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.354677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.481 [2024-12-08 14:19:46.354688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.566 ms 00:24:43.481 [2024-12-08 14:19:46.354697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.369107] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:43.481 [2024-12-08 14:19:46.369281] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:43.481 [2024-12-08 14:19:46.369301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.369309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:43.481 [2024-12-08 14:19:46.369320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.495 ms 00:24:43.481 [2024-12-08 14:19:46.369326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.481 [2024-12-08 14:19:46.395101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.481 [2024-12-08 14:19:46.395150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:43.481 [2024-12-08 14:19:46.395162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.733 ms 00:24:43.481 [2024-12-08 14:19:46.395170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.407964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.408015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:43.742 [2024-12-08 14:19:46.408027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.739 ms 00:24:43.742 [2024-12-08 14:19:46.408035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.420548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.420590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:43.742 [2024-12-08 14:19:46.420613] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.468 ms 00:24:43.742 [2024-12-08 14:19:46.420621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.421031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.421045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:43.742 [2024-12-08 14:19:46.421067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:24:43.742 [2024-12-08 14:19:46.421076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.487042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.487265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:43.742 [2024-12-08 14:19:46.487290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.947 ms 00:24:43.742 [2024-12-08 14:19:46.487300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.498751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:43.742 [2024-12-08 14:19:46.501961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.502110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:43.742 [2024-12-08 14:19:46.502166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.614 ms 00:24:43.742 [2024-12-08 14:19:46.502200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.502293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.502321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:43.742 [2024-12-08 14:19:46.502343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:43.742 [2024-12-08 14:19:46.502363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.503697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.503848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:43.742 [2024-12-08 14:19:46.503904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:24:43.742 [2024-12-08 14:19:46.503928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.505308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.505450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:43.742 [2024-12-08 14:19:46.505469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:24:43.742 [2024-12-08 14:19:46.505477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.505515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.505524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:43.742 [2024-12-08 14:19:46.505540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:43.742 [2024-12-08 14:19:46.505549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.505586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:24:43.742 [2024-12-08 14:19:46.505597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.505608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:43.742 [2024-12-08 14:19:46.505617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:43.742 [2024-12-08 14:19:46.505625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.531879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.532056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:43.742 [2024-12-08 14:19:46.532124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.236 ms 00:24:43.742 [2024-12-08 14:19:46.532148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.532239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.742 [2024-12-08 14:19:46.532267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:43.742 [2024-12-08 14:19:46.532288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:43.742 [2024-12-08 14:19:46.532307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.742 [2024-12-08 14:19:46.539166] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.696 ms, result 0 00:24:45.124
[2024-12-08T14:19:48.986Z] Copying: 1128/1048576 [kB] (1128 kBps) [... intermediate Copying progress updates condensed ...] [2024-12-08T14:20:35.309Z] Copying: 1024/1024 [MB] (average 21 MBps)
[2024-12-08 14:20:35.098194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.098325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:32.389 [2024-12-08 14:20:35.098364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:32.389 [2024-12-08 14:20:35.098390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.098452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.389 [2024-12-08 14:20:35.105838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.105968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:32.389 [2024-12-08 14:20:35.106044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.347 ms 00:25:32.389 [2024-12-08 14:20:35.106066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.106312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.106340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:32.389 [2024-12-08 14:20:35.106415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:25:32.389 [2024-12-08 14:20:35.106433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.120379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.120536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:32.389 [2024-12-08 14:20:35.120594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.909 ms 00:25:32.389 [2024-12-08 14:20:35.120614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.125784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.125947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:32.389 [2024-12-08 14:20:35.126036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:25:32.389 [2024-12-08 14:20:35.126061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.147620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.147759] mngt/ftl_mngt.c: 407:trace_step:
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:32.389 [2024-12-08 14:20:35.147812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.493 ms 00:25:32.389 [2024-12-08 14:20:35.147830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.160783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.160905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:32.389 [2024-12-08 14:20:35.160952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.833 ms 00:25:32.389 [2024-12-08 14:20:35.160971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.166993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.167089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:32.389 [2024-12-08 14:20:35.167138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.971 ms 00:25:32.389 [2024-12-08 14:20:35.167156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.186674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.186775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:32.389 [2024-12-08 14:20:35.186787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.494 ms 00:25:32.389 [2024-12-08 14:20:35.186793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.205972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.206017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:32.389 [2024-12-08 14:20:35.206026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.155 ms 00:25:32.389 [2024-12-08 14:20:35.206040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.224097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.224123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:32.389 [2024-12-08 14:20:35.224131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.025 ms 00:25:32.389 [2024-12-08 14:20:35.224137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.242281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.389 [2024-12-08 14:20:35.242314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:32.389 [2024-12-08 14:20:35.242322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.091 ms 00:25:32.389 [2024-12-08 14:20:35.242328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.389 [2024-12-08 14:20:35.242354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:32.389 [2024-12-08 14:20:35.242366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:32.389 [2024-12-08 14:20:35.242374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:25:32.389 [2024-12-08 14:20:35.242380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:25:32.389 [2024-12-08 14:20:35.242387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:32.389 [2024-12-08 14:20:35.242427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:25:32.390 [2024-12-08 14:20:35.242533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:32.390 [2024-12-08 14:20:35.242812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242820] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:32.391 [2024-12-08 14:20:35.242960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:32.391 [2024-12-08 14:20:35.242966] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4613a08b-5f26-4463-9b36-017e8defea61 00:25:32.391 [2024-12-08 14:20:35.242972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 265216 00:25:32.391 [2024-12-08 14:20:35.242997] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 175040 00:25:32.391 [2024-12-08 14:20:35.243003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 173056 00:25:32.391 [2024-12-08 14:20:35.243010] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0115 00:25:32.391 [2024-12-08 14:20:35.243015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:32.391 [2024-12-08 14:20:35.243022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:32.391 [2024-12-08 14:20:35.243028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:32.391 [2024-12-08 14:20:35.243033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:32.391 [2024-12-08 14:20:35.243044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:32.391 [2024-12-08 14:20:35.243050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.391 [2024-12-08 14:20:35.243056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:32.391 [2024-12-08 14:20:35.243063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:25:32.391 [2024-12-08 14:20:35.243069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.391 [2024-12-08 14:20:35.253061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.391 [2024-12-08 14:20:35.253158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:32.391 [2024-12-08 14:20:35.253170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.971 ms 00:25:32.391 [2024-12-08 14:20:35.253176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.391 [2024-12-08 14:20:35.253336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.391 [2024-12-08 14:20:35.253342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:32.391 [2024-12-08 14:20:35.253349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:25:32.391 [2024-12-08 14:20:35.253359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.391 [2024-12-08 14:20:35.282875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.391 [2024-12-08 14:20:35.282902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.391 [2024-12-08 14:20:35.282910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.391 [2024-12-08 14:20:35.282917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.391 [2024-12-08 14:20:35.282957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.391 [2024-12-08 14:20:35.282963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.391 [2024-12-08 14:20:35.282969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.391 [2024-12-08 14:20:35.282979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.391 [2024-12-08 14:20:35.283049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.391 [2024-12-08 14:20:35.283057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:32.391 [2024-12-08 14:20:35.283064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:32.391 [2024-12-08 14:20:35.283070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.391 [2024-12-08 14:20:35.283083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.391 [2024-12-08 14:20:35.283090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:32.391 [2024-12-08 14:20:35.283096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.391 [2024-12-08 14:20:35.283101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.656 [2024-12-08 14:20:35.344051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.344096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:32.657 [2024-12-08 14:20:35.344106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.344113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.657 [2024-12-08 14:20:35.368430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.657 [2024-12-08 14:20:35.368503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.657 [2024-12-08 14:20:35.368555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.657 [2024-12-08 14:20:35.368652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:32.657 [2024-12-08 14:20:35.368700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.657 [2024-12-08 14:20:35.368757] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.657 [2024-12-08 14:20:35.368813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.657 [2024-12-08 14:20:35.368819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.657 [2024-12-08 14:20:35.368825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.657 [2024-12-08 14:20:35.368929] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.757 ms, result 0 00:25:33.597 00:25:33.597 00:25:33.597 14:20:36 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:35.576 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:35.576 14:20:38 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:35.576 [2024-12-08 14:20:38.372601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:35.576 [2024-12-08 14:20:38.372687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77899 ] 00:25:35.836 [2024-12-08 14:20:38.511744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.836 [2024-12-08 14:20:38.682607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.097 [2024-12-08 14:20:38.909973] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.097 [2024-12-08 14:20:38.910040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.361 [2024-12-08 14:20:39.059191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.059229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.361 [2024-12-08 14:20:39.059240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.361 [2024-12-08 14:20:39.059248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.059287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.059295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.361 [2024-12-08 14:20:39.059302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:36.361 [2024-12-08 14:20:39.059307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.059320] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.361 [2024-12-08 14:20:39.059885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.361 [2024-12-08 14:20:39.059897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.059904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:25:36.361 [2024-12-08 14:20:39.059911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:25:36.361 [2024-12-08 14:20:39.059917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.061212] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.361 [2024-12-08 14:20:39.071948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.071977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.361 [2024-12-08 14:20:39.071996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.738 ms 00:25:36.361 [2024-12-08 14:20:39.072003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.072053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.072060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.361 [2024-12-08 14:20:39.072067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:36.361 [2024-12-08 14:20:39.072072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.078505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.078531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.361 [2024-12-08 14:20:39.078538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.385 ms 00:25:36.361 [2024-12-08 14:20:39.078544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.078612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.078620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.361 [2024-12-08 14:20:39.078627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:36.361 [2024-12-08 14:20:39.078633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.078669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.078676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.361 [2024-12-08 14:20:39.078683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:36.361 [2024-12-08 14:20:39.078688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.078712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.361 [2024-12-08 14:20:39.081862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.081886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.361 [2024-12-08 14:20:39.081894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:25:36.361 [2024-12-08 14:20:39.081900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.081929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.361 [2024-12-08 14:20:39.081936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.361 [2024-12-08 14:20:39.081943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 
00:25:36.361 [2024-12-08 14:20:39.081950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.361 [2024-12-08 14:20:39.081966] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.361 [2024-12-08 14:20:39.081997] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:36.362 [2024-12-08 14:20:39.082025] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.362 [2024-12-08 14:20:39.082039] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:36.362 [2024-12-08 14:20:39.082099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:36.362 [2024-12-08 14:20:39.082107] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.362 [2024-12-08 14:20:39.082116] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:36.362 [2024-12-08 14:20:39.082124] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082132] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082138] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.362 [2024-12-08 14:20:39.082144] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.362 [2024-12-08 14:20:39.082150] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:36.362 [2024-12-08 14:20:39.082156] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:36.362 [2024-12-08 14:20:39.082162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.362 [2024-12-08 14:20:39.082169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.362 [2024-12-08 14:20:39.082175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:36.362 [2024-12-08 14:20:39.082180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.362 [2024-12-08 14:20:39.082229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.362 [2024-12-08 14:20:39.082236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.362 [2024-12-08 14:20:39.082242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:36.362 [2024-12-08 14:20:39.082247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.362 [2024-12-08 14:20:39.082306] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.362 [2024-12-08 14:20:39.082315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.362 [2024-12-08 14:20:39.082321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082333] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.362 [2024-12-08 14:20:39.082339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082345] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.362 [2024-12-08 14:20:39.082356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.362 [2024-12-08 14:20:39.082367] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.362 [2024-12-08 14:20:39.082373] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.362 [2024-12-08 14:20:39.082378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.362 [2024-12-08 14:20:39.082383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.362 [2024-12-08 14:20:39.082389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:36.362 [2024-12-08 14:20:39.082394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082405] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.362 [2024-12-08 14:20:39.082410] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:36.362 [2024-12-08 14:20:39.082415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:36.362 [2024-12-08 14:20:39.082425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:36.362 [2024-12-08 14:20:39.082430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.362 [2024-12-08 14:20:39.082440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.362 [2024-12-08 14:20:39.082455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082465] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.362 [2024-12-08 14:20:39.082470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.362 [2024-12-08 14:20:39.082485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.362 [2024-12-08 14:20:39.082502] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.362 [2024-12-08 14:20:39.082512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.362 [2024-12-08 
14:20:39.082516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:36.362 [2024-12-08 14:20:39.082521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.362 [2024-12-08 14:20:39.082526] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.362 [2024-12-08 14:20:39.082534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.362 [2024-12-08 14:20:39.082541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.362 [2024-12-08 14:20:39.082553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.362 [2024-12-08 14:20:39.082559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.362 [2024-12-08 14:20:39.082565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.362 [2024-12-08 14:20:39.082577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.362 [2024-12-08 14:20:39.082582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.362 [2024-12-08 14:20:39.082587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.362 [2024-12-08 14:20:39.082593] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.362 [2024-12-08 14:20:39.082600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.362 [2024-12-08 14:20:39.082611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.362 [2024-12-08 14:20:39.082616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:36.362 [2024-12-08 14:20:39.082622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:36.362 [2024-12-08 14:20:39.082629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:36.362 [2024-12-08 14:20:39.082640] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:36.362 [2024-12-08 14:20:39.082645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:36.362 [2024-12-08 14:20:39.082651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:36.362 [2024-12-08 14:20:39.082656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:36.362 [2024-12-08 14:20:39.082665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:36.362 [2024-12-08 14:20:39.082671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:36.362 [2024-12-08 14:20:39.082680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 
blk_sz:0x20 00:25:36.362 [2024-12-08 14:20:39.082686] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:36.362 [2024-12-08 14:20:39.082694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:36.362 [2024-12-08 14:20:39.082700] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.362 [2024-12-08 14:20:39.082710] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.362 [2024-12-08 14:20:39.082719] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.362 [2024-12-08 14:20:39.082725] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.362 [2024-12-08 14:20:39.082733] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.362 [2024-12-08 14:20:39.082739] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.362 [2024-12-08 14:20:39.082748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.362 [2024-12-08 14:20:39.082753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.362 [2024-12-08 14:20:39.082759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:25:36.362 [2024-12-08 14:20:39.082770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.362 [2024-12-08 14:20:39.097090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.362 [2024-12-08 14:20:39.097118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.362 [2024-12-08 14:20:39.097127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.279 ms 00:25:36.362 [2024-12-08 14:20:39.097137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.097206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.097213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.363 [2024-12-08 14:20:39.097220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:36.363 [2024-12-08 14:20:39.097227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.136331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.136366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.363 [2024-12-08 14:20:39.136377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.069 ms 00:25:36.363 [2024-12-08 14:20:39.136385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.136420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.136429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.363 [2024-12-08 14:20:39.136436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.002 ms 00:25:36.363 [2024-12-08 14:20:39.136443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.136852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.136867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.363 [2024-12-08 14:20:39.136874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:25:36.363 [2024-12-08 14:20:39.136884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.136997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.137006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.363 [2024-12-08 14:20:39.137013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:36.363 [2024-12-08 14:20:39.137019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.149777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.149804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.363 [2024-12-08 14:20:39.149813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.741 ms 00:25:36.363 [2024-12-08 14:20:39.149819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.160768] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:36.363 [2024-12-08 14:20:39.160796] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.363 [2024-12-08 14:20:39.160805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.160811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.363 [2024-12-08 14:20:39.160818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.912 ms 00:25:36.363 [2024-12-08 14:20:39.160824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.179897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.179925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.363 [2024-12-08 14:20:39.179934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.041 ms 00:25:36.363 [2024-12-08 14:20:39.179941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.189379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.189405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.363 [2024-12-08 14:20:39.189412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.403 ms 00:25:36.363 [2024-12-08 14:20:39.189418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.198769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.198799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.363 [2024-12-08 14:20:39.198806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.324 ms 00:25:36.363 [2024-12-08 14:20:39.198812] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.199105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.199116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.363 [2024-12-08 14:20:39.199145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:25:36.363 [2024-12-08 14:20:39.199151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.249067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.249103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.363 [2024-12-08 14:20:39.249113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.903 ms 00:25:36.363 [2024-12-08 14:20:39.249120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.257711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.363 [2024-12-08 14:20:39.260120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.260144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.363 [2024-12-08 14:20:39.260153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.960 ms 00:25:36.363 [2024-12-08 14:20:39.260164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.260217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.260226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.363 [2024-12-08 14:20:39.260234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.363 [2024-12-08 14:20:39.260240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.260875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.260897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.363 [2024-12-08 14:20:39.260904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:25:36.363 [2024-12-08 14:20:39.260910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.261965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.262110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:36.363 [2024-12-08 14:20:39.262124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:25:36.363 [2024-12-08 14:20:39.262130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.262158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.262164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.363 [2024-12-08 14:20:39.262177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.363 [2024-12-08 14:20:39.262182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.363 [2024-12-08 14:20:39.262211] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.363 [2024-12-08 14:20:39.262218] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:36.363 [2024-12-08 14:20:39.262226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.363 [2024-12-08 14:20:39.262232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.363 [2024-12-08 14:20:39.262238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.624 [2024-12-08 14:20:39.281620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-12-08 14:20:39.281727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.625 [2024-12-08 14:20:39.281741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.369 ms 00:25:36.625 [2024-12-08 14:20:39.281748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-12-08 14:20:39.281805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.625 [2024-12-08 14:20:39.281813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.625 [2024-12-08 14:20:39.281819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:36.625 [2024-12-08 14:20:39.281825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.625 [2024-12-08 14:20:39.282741] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.172 ms, result 0 00:25:37.570  [2024-12-08T14:21:42.925Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-08 14:21:42.745923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.005 [2024-12-08 14:21:42.746320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:40.005 [2024-12-08 14:21:42.746407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:40.005 [2024-12-08 14:21:42.746434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.005 [2024-12-08 14:21:42.746483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:40.005 [2024-12-08 14:21:42.749518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.005 [2024-12-08 14:21:42.749684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:40.005 [2024-12-08 14:21:42.749753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:26:40.005 [2024-12-08 14:21:42.749777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.005 [2024-12-08 14:21:42.750780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.005 [2024-12-08 14:21:42.750900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:40.005 [2024-12-08 14:21:42.750963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:26:40.005 [2024-12-08 14:21:42.751007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.005 [2024-12-08 14:21:42.754487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.005 [2024-12-08 14:21:42.754575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:40.005 [2024-12-08 14:21:42.754638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.446 ms 00:26:40.005 [2024-12-08
14:21:42.754662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.005 [2024-12-08 14:21:42.760849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.005 [2024-12-08 14:21:42.761012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:40.005 [2024-12-08 14:21:42.761163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:26:40.005 [2024-12-08 14:21:42.761714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.005 [2024-12-08 14:21:42.791362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.005 [2024-12-08 14:21:42.791524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:40.005 [2024-12-08 14:21:42.791590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.496 ms 00:26:40.005 [2024-12-08 14:21:42.791614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.808400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.006 [2024-12-08 14:21:42.808552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:40.006 [2024-12-08 14:21:42.808622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.543 ms 00:26:40.006 [2024-12-08 14:21:42.808654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.819880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.006 [2024-12-08 14:21:42.819926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:40.006 [2024-12-08 14:21:42.819940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.017 ms 00:26:40.006 [2024-12-08 14:21:42.819948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.845467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.006 [2024-12-08 14:21:42.845511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:40.006 [2024-12-08 14:21:42.845521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.502 ms 00:26:40.006 [2024-12-08 14:21:42.845529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.871322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.006 [2024-12-08 14:21:42.871363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:40.006 [2024-12-08 14:21:42.871386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.750 ms 00:26:40.006 [2024-12-08 14:21:42.871393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.896429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.006 [2024-12-08 14:21:42.896471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:40.006 [2024-12-08 14:21:42.896482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.992 ms 00:26:40.006 [2024-12-08 14:21:42.896490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.921408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.006 [2024-12-08 14:21:42.921450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:40.006 [2024-12-08 14:21:42.921462] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.836 ms 00:26:40.006 [2024-12-08 14:21:42.921470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.006 [2024-12-08 14:21:42.921511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:40.006 [2024-12-08 14:21:42.921533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:40.006 [2024-12-08 14:21:42.921545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:26:40.006 [2024-12-08 14:21:42.921554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 
wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.921973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:40.006 [2024-12-08 14:21:42.922057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922122] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:40.007 [2024-12-08 14:21:42.922370] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:40.007 [2024-12-08 14:21:42.922378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4613a08b-5f26-4463-9b36-017e8defea61 00:26:40.007 [2024-12-08 14:21:42.922387] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:26:40.007 [2024-12-08 14:21:42.922395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:40.007 [2024-12-08 14:21:42.922403] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:40.007 [2024-12-08 14:21:42.922411] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:40.007 [2024-12-08 14:21:42.922419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:40.007 [2024-12-08 14:21:42.922428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:40.007 [2024-12-08 14:21:42.922436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:40.007 [2024-12-08 14:21:42.922450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:40.007 [2024-12-08 14:21:42.922459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:40.007 [2024-12-08 14:21:42.922467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.007 [2024-12-08 14:21:42.922475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:40.007 [2024-12-08 14:21:42.922487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:26:40.007 [2024-12-08 14:21:42.922495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:42.936328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.269 [2024-12-08 14:21:42.936488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:40.269 [2024-12-08 14:21:42.936506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.800 ms 00:26:40.269 [2024-12-08 14:21:42.936515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:42.936753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.269 [2024-12-08 14:21:42.936764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:40.269 [2024-12-08 14:21:42.936774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:26:40.269 [2024-12-08 14:21:42.936782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:42.975821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:42.975970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:40.269 [2024-12-08 14:21:42.976016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:42.976026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:42.976099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:40.269 [2024-12-08 14:21:42.976108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:40.269 [2024-12-08 14:21:42.976116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:42.976124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:42.976202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:42.976213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:40.269 [2024-12-08 14:21:42.976222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:42.976230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:42.976246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:42.976259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:40.269 [2024-12-08 14:21:42.976267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:42.976274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.056388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.056440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:40.269 [2024-12-08 14:21:43.056453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.056462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.088627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.088678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:40.269 [2024-12-08 14:21:43.088689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.088697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.088759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.088769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:40.269 [2024-12-08 14:21:43.088777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.088785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.088827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.088837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:40.269 [2024-12-08 14:21:43.088849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.088857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.088964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.088974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:40.269 [2024-12-08 14:21:43.089013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.089022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 
14:21:43.089068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.089078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:40.269 [2024-12-08 14:21:43.089087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.089098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.089141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.089151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:40.269 [2024-12-08 14:21:43.089159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.089167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.089218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.269 [2024-12-08 14:21:43.089228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:40.269 [2024-12-08 14:21:43.089242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.269 [2024-12-08 14:21:43.089251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.269 [2024-12-08 14:21:43.089384] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.430 ms, result 0 00:26:41.215 00:26:41.215 00:26:41.215 14:21:43 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:43.764 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:43.764 Process with pid 75723 is not found 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75723 00:26:43.764 14:21:46 -- common/autotest_common.sh@936 -- # '[' -z 75723 ']' 00:26:43.764 14:21:46 -- common/autotest_common.sh@940 -- # kill -0 75723 00:26:43.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75723) - No such process 00:26:43.764 14:21:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75723 is not found' 00:26:43.764 14:21:46 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:44.025 Remove shared memory files 00:26:44.025 14:21:46 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:44.025 14:21:46 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:44.025 14:21:46 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:44.025 14:21:46 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:44.025 14:21:46 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:44.025 14:21:46 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:44.025 14:21:46 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:44.025 ************************************ 00:26:44.025 END TEST 
ftl_dirty_shutdown 00:26:44.025 ************************************ 00:26:44.025 00:26:44.025 real 4m32.317s 00:26:44.025 user 5m0.042s 00:26:44.025 sys 0m27.747s 00:26:44.025 14:21:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:44.025 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:26:44.025 14:21:46 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:44.025 14:21:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:44.025 14:21:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:44.025 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:26:44.025 ************************************ 00:26:44.025 START TEST ftl_upgrade_shutdown 00:26:44.025 ************************************ 00:26:44.025 14:21:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:44.287 * Looking for test storage... 00:26:44.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:44.287 14:21:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:44.287 14:21:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:44.287 14:21:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:44.287 14:21:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:44.287 14:21:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:44.287 14:21:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:44.287 14:21:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:44.287 14:21:47 -- scripts/common.sh@335 -- # IFS=.-: 00:26:44.287 14:21:47 -- scripts/common.sh@335 -- # read -ra ver1 00:26:44.287 14:21:47 -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.287 14:21:47 -- scripts/common.sh@336 -- # read -ra ver2 00:26:44.287 14:21:47 -- scripts/common.sh@337 -- # local 'op=<' 00:26:44.287 14:21:47 -- scripts/common.sh@339 -- # ver1_l=2 00:26:44.287 14:21:47 -- scripts/common.sh@340 -- # ver2_l=1 00:26:44.287 14:21:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:44.287 14:21:47 -- scripts/common.sh@343 -- # case "$op" in 00:26:44.287 14:21:47 -- scripts/common.sh@344 -- # : 1 00:26:44.287 14:21:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:44.287 14:21:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:44.287 14:21:47 -- scripts/common.sh@364 -- # decimal 1 00:26:44.287 14:21:47 -- scripts/common.sh@352 -- # local d=1 00:26:44.287 14:21:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.287 14:21:47 -- scripts/common.sh@354 -- # echo 1 00:26:44.287 14:21:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:44.287 14:21:47 -- scripts/common.sh@365 -- # decimal 2 00:26:44.287 14:21:47 -- scripts/common.sh@352 -- # local d=2 00:26:44.287 14:21:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.287 14:21:47 -- scripts/common.sh@354 -- # echo 2 00:26:44.287 14:21:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:44.287 14:21:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:44.287 14:21:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:44.287 14:21:47 -- scripts/common.sh@367 -- # return 0 00:26:44.287 14:21:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.287 14:21:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:44.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.287 --rc genhtml_branch_coverage=1 00:26:44.287 --rc genhtml_function_coverage=1 00:26:44.287 --rc genhtml_legend=1 00:26:44.287 --rc geninfo_all_blocks=1 00:26:44.287 --rc geninfo_unexecuted_blocks=1 00:26:44.287 00:26:44.287 ' 00:26:44.287 14:21:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:44.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.287 --rc genhtml_branch_coverage=1 00:26:44.287 --rc genhtml_function_coverage=1 00:26:44.287 --rc genhtml_legend=1 00:26:44.287 --rc geninfo_all_blocks=1 00:26:44.287 --rc geninfo_unexecuted_blocks=1 00:26:44.287 00:26:44.287 ' 00:26:44.287 14:21:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:44.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.287 --rc genhtml_branch_coverage=1 00:26:44.287 --rc genhtml_function_coverage=1 00:26:44.287 --rc genhtml_legend=1 00:26:44.287 --rc geninfo_all_blocks=1 00:26:44.287 --rc geninfo_unexecuted_blocks=1 00:26:44.287 00:26:44.287 ' 00:26:44.287 14:21:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:44.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.287 --rc genhtml_branch_coverage=1 00:26:44.287 --rc genhtml_function_coverage=1 00:26:44.288 --rc genhtml_legend=1 00:26:44.288 --rc geninfo_all_blocks=1 00:26:44.288 --rc geninfo_unexecuted_blocks=1 00:26:44.288 00:26:44.288 ' 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:44.288 14:21:47 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:44.288 14:21:47 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:44.288 14:21:47 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:44.288 14:21:47 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:44.288 14:21:47 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:44.288 14:21:47 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:44.288 14:21:47 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:44.288 14:21:47 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:44.288 14:21:47 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.288 14:21:47 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.288 14:21:47 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:44.288 14:21:47 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:44.288 14:21:47 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:44.288 14:21:47 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:44.288 14:21:47 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:44.288 14:21:47 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:44.288 14:21:47 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.288 14:21:47 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:44.288 14:21:47 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:44.288 14:21:47 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:44.288 14:21:47 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:44.288 14:21:47 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:44.288 14:21:47 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:44.288 14:21:47 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:44.288 14:21:47 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:44.288 14:21:47 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:44.288 14:21:47 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:44.288 14:21:47 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:44.288 14:21:47 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:44.288 14:21:47 -- ftl/common.sh@81 -- # local base_bdev= 00:26:44.288 14:21:47 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:44.288 14:21:47 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:44.288 14:21:47 -- ftl/common.sh@89 -- # spdk_tgt_pid=78667 00:26:44.288 14:21:47 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:44.288 14:21:47 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:44.288 14:21:47 -- ftl/common.sh@91 -- # waitforlisten 78667 00:26:44.288 14:21:47 -- common/autotest_common.sh@829 -- # '[' -z 78667 ']' 00:26:44.288 14:21:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.288 14:21:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:44.288 14:21:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.288 14:21:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:44.288 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:26:44.288 [2024-12-08 14:21:47.140221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:44.288 [2024-12-08 14:21:47.141094] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78667 ] 00:26:44.549 [2024-12-08 14:21:47.295317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.810 [2024-12-08 14:21:47.519941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:44.810 [2024-12-08 14:21:47.520411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.753 14:21:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.753 14:21:48 -- common/autotest_common.sh@862 -- # return 0 00:26:45.753 14:21:48 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:45.753 14:21:48 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:45.753 14:21:48 -- ftl/common.sh@99 -- # local params 00:26:45.753 14:21:48 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.015 14:21:48 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:46.015 14:21:48 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.015 14:21:48 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:46.015 14:21:48 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.015 14:21:48 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:46.015 14:21:48 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.015 14:21:48 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:46.015 14:21:48 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.015 14:21:48 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:46.015 14:21:48 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:46.015 14:21:48 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:46.015 14:21:48 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:46.015 14:21:48 -- ftl/common.sh@54 -- # local name=base 00:26:46.015 14:21:48 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:46.015 14:21:48 -- ftl/common.sh@56 -- # local size=20480 00:26:46.015 14:21:48 -- ftl/common.sh@59 -- # local base_bdev 00:26:46.015 14:21:48 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:26:46.276 14:21:48 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:46.276 14:21:48 -- ftl/common.sh@62 -- # local base_size 00:26:46.276 14:21:48 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:46.276 14:21:48 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:26:46.276 14:21:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:46.276 14:21:48 -- common/autotest_common.sh@1369 -- # local bs 00:26:46.276 14:21:48 -- common/autotest_common.sh@1370 -- # local nb 00:26:46.276 14:21:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:46.276 14:21:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:46.276 { 00:26:46.276 "name": "basen1", 00:26:46.276 "aliases": [ 00:26:46.276 "94aded88-cc04-47f4-800d-dc8d2008df1b" 00:26:46.276 ], 00:26:46.276 "product_name": "NVMe disk", 00:26:46.276 "block_size": 4096, 00:26:46.276 "num_blocks": 1310720, 00:26:46.276 "uuid": "94aded88-cc04-47f4-800d-dc8d2008df1b", 00:26:46.276 "assigned_rate_limits": { 00:26:46.276 "rw_ios_per_sec": 0, 00:26:46.276 "rw_mbytes_per_sec": 0, 00:26:46.276 "r_mbytes_per_sec": 0, 00:26:46.276 "w_mbytes_per_sec": 0 00:26:46.276 }, 00:26:46.276 "claimed": true, 00:26:46.276 "claim_type": "read_many_write_one", 00:26:46.276 "zoned": false, 00:26:46.276 "supported_io_types": { 00:26:46.276 "read": true, 00:26:46.276 "write": true, 00:26:46.276 "unmap": true, 00:26:46.276 "write_zeroes": true, 00:26:46.276 "flush": true, 00:26:46.276 "reset": true, 00:26:46.276 "compare": true, 00:26:46.276 "compare_and_write": false, 00:26:46.276 "abort": true, 00:26:46.276 "nvme_admin": true, 00:26:46.276 "nvme_io": true 00:26:46.276 }, 00:26:46.276 "driver_specific": { 00:26:46.276 "nvme": [ 00:26:46.276 { 00:26:46.276 "pci_address": "0000:00:07.0", 00:26:46.276 "trid": { 00:26:46.276 "trtype": "PCIe", 00:26:46.276 "traddr": "0000:00:07.0" 00:26:46.276 }, 00:26:46.276 "ctrlr_data": { 00:26:46.276 "cntlid": 0, 00:26:46.276 "vendor_id": "0x1b36", 00:26:46.276 "model_number": "QEMU NVMe Ctrl", 00:26:46.276 "serial_number": "12341", 00:26:46.276 "firmware_revision": "8.0.0", 00:26:46.276 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:46.276 "oacs": { 00:26:46.276 "security": 0, 00:26:46.276 "format": 1, 00:26:46.276 "firmware": 0, 00:26:46.276 "ns_manage": 1 00:26:46.276 }, 00:26:46.276 "multi_ctrlr": false, 00:26:46.276 "ana_reporting": false 00:26:46.276 }, 00:26:46.276 "vs": { 00:26:46.276 "nvme_version": "1.4" 00:26:46.276 }, 00:26:46.276 "ns_data": { 00:26:46.276 "id": 1, 00:26:46.276 "can_share": false 00:26:46.276 } 00:26:46.276 } 00:26:46.276 ], 00:26:46.276 "mp_policy": "active_passive" 00:26:46.276 } 00:26:46.276 } 00:26:46.276 ]' 00:26:46.276 14:21:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:46.537 14:21:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:46.537 14:21:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:46.537 14:21:49 -- common/autotest_common.sh@1373 -- # nb=1310720 00:26:46.537 14:21:49 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:26:46.537 14:21:49 -- common/autotest_common.sh@1377 -- # echo 5120 00:26:46.537 14:21:49 -- ftl/common.sh@63 -- # base_size=5120 00:26:46.537 14:21:49 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:46.537 14:21:49 -- ftl/common.sh@67 -- # clear_lvols 00:26:46.537 14:21:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:46.537 14:21:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:26:46.537 14:21:49 -- ftl/common.sh@28 -- # stores=af278f2b-5926-4740-98c8-c5a364c00fd6 00:26:46.537 14:21:49 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:46.537 14:21:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af278f2b-5926-4740-98c8-c5a364c00fd6 00:26:46.797 14:21:49 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:47.057 14:21:49 -- ftl/common.sh@68 -- # lvs=fb53ebd3-1608-4f07-ba4b-b017f970ae58 00:26:47.057 14:21:49 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u fb53ebd3-1608-4f07-ba4b-b017f970ae58 00:26:47.316 14:21:50 -- ftl/common.sh@107 -- # base_bdev=5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 00:26:47.316 14:21:50 -- ftl/common.sh@108 -- # [[ -z 5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 ]] 00:26:47.316 14:21:50 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 5120 00:26:47.316 14:21:50 -- ftl/common.sh@35 -- # local name=cache 00:26:47.316 14:21:50 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:47.316 14:21:50 -- ftl/common.sh@37 -- # local base_bdev=5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 00:26:47.316 14:21:50 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:47.316 14:21:50 -- ftl/common.sh@41 -- # get_bdev_size 5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 00:26:47.316 14:21:50 -- common/autotest_common.sh@1367 -- # local bdev_name=5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 00:26:47.316 14:21:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:47.316 14:21:50 -- common/autotest_common.sh@1369 -- # local bs 00:26:47.316 14:21:50 -- common/autotest_common.sh@1370 -- # local nb 00:26:47.316 14:21:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 00:26:47.574 14:21:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:47.574 { 00:26:47.574 "name": "5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5", 00:26:47.574 "aliases": [ 00:26:47.574 "lvs/basen1p0" 00:26:47.574 ], 00:26:47.574 "product_name": "Logical Volume", 00:26:47.574 "block_size": 4096, 00:26:47.574 "num_blocks": 5242880, 00:26:47.574 "uuid": "5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5", 00:26:47.574 "assigned_rate_limits": { 00:26:47.574 "rw_ios_per_sec": 0, 00:26:47.574 "rw_mbytes_per_sec": 0, 00:26:47.574 "r_mbytes_per_sec": 0, 00:26:47.574 "w_mbytes_per_sec": 0 00:26:47.574 }, 00:26:47.574 "claimed": false, 00:26:47.574 "zoned": false, 00:26:47.574 "supported_io_types": { 00:26:47.574 "read": true, 00:26:47.574 "write": true, 00:26:47.574 "unmap": true, 00:26:47.574 "write_zeroes": true, 00:26:47.574 "flush": false, 00:26:47.574 "reset": true, 00:26:47.574 "compare": false, 00:26:47.574 "compare_and_write": false, 00:26:47.574 "abort": false, 00:26:47.574 "nvme_admin": false, 00:26:47.574 "nvme_io": false 00:26:47.574 }, 00:26:47.574 "driver_specific": { 00:26:47.574 "lvol": { 00:26:47.574 "lvol_store_uuid": "fb53ebd3-1608-4f07-ba4b-b017f970ae58", 00:26:47.574 "base_bdev": "basen1", 00:26:47.574 "thin_provision": true, 00:26:47.574 "snapshot": false, 00:26:47.574 "clone": false, 00:26:47.574 "esnap_clone": false 00:26:47.574 } 00:26:47.574 } 00:26:47.574 } 00:26:47.574 ]' 00:26:47.574 14:21:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:47.574 14:21:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:47.574 14:21:50 -- common/autotest_common.sh@1373 -- # jq 
'.[] .num_blocks' 00:26:47.574 14:21:50 -- common/autotest_common.sh@1373 -- # nb=5242880 00:26:47.574 14:21:50 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:26:47.574 14:21:50 -- common/autotest_common.sh@1377 -- # echo 20480 00:26:47.574 14:21:50 -- ftl/common.sh@41 -- # local base_size=1024 00:26:47.574 14:21:50 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:47.574 14:21:50 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:47.832 14:21:50 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:47.832 14:21:50 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:47.832 14:21:50 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:48.092 14:21:50 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:48.092 14:21:50 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:48.092 14:21:50 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5e614eff-c0dd-4bc9-abbd-f2ae8fbcecf5 -c cachen1p0 --l2p_dram_limit 2 00:26:48.092 [2024-12-08 14:21:50.927133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.927172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:48.092 [2024-12-08 14:21:50.927184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:48.092 [2024-12-08 14:21:50.927192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.927231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.927238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:48.092 [2024-12-08 14:21:50.927246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:48.092 [2024-12-08 14:21:50.927251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.927268] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:48.092 [2024-12-08 14:21:50.927832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:48.092 [2024-12-08 14:21:50.927846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.927852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:48.092 [2024-12-08 14:21:50.927861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:26:48.092 [2024-12-08 14:21:50.927867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.927892] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d407888f-f904-40e3-9a0a-82db9f134e12 00:26:48.092 [2024-12-08 14:21:50.928857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.928889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:48.092 [2024-12-08 14:21:50.928897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:48.092 [2024-12-08 14:21:50.928904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.933642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.933671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:26:48.092 [2024-12-08 14:21:50.933679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.681 ms 00:26:48.092 [2024-12-08 14:21:50.933686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.933715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.933723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:48.092 [2024-12-08 14:21:50.933729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:48.092 [2024-12-08 14:21:50.933737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.933773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.933784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:48.092 [2024-12-08 14:21:50.933791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:48.092 [2024-12-08 14:21:50.933798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.933816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:48.092 [2024-12-08 14:21:50.936746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.936769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:48.092 [2024-12-08 14:21:50.936778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.933 ms 00:26:48.092 [2024-12-08 14:21:50.936784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.936806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.936812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:48.092 [2024-12-08 14:21:50.936819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:48.092 [2024-12-08 14:21:50.936825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.936845] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:48.092 [2024-12-08 14:21:50.936932] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:48.092 [2024-12-08 14:21:50.936944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:48.092 [2024-12-08 14:21:50.936952] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:48.092 [2024-12-08 14:21:50.936961] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:48.092 [2024-12-08 14:21:50.936968] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:48.092 [2024-12-08 14:21:50.936976] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:48.092 [2024-12-08 14:21:50.936995] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:48.092 [2024-12-08 14:21:50.937003] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:48.092 [2024-12-08 14:21:50.937008] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:48.092 [2024-12-08 
14:21:50.937016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.937027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:48.092 [2024-12-08 14:21:50.937034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.172 ms 00:26:48.092 [2024-12-08 14:21:50.937054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.937104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.092 [2024-12-08 14:21:50.937110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:48.092 [2024-12-08 14:21:50.937118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:48.092 [2024-12-08 14:21:50.937124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.092 [2024-12-08 14:21:50.937182] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:48.092 [2024-12-08 14:21:50.937189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:48.092 [2024-12-08 14:21:50.937196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:48.092 [2024-12-08 14:21:50.937202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937209] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:48.092 [2024-12-08 14:21:50.937214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:48.092 [2024-12-08 14:21:50.937226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:48.092 [2024-12-08 14:21:50.937232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:48.092 [2024-12-08 14:21:50.937237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:48.092 [2024-12-08 14:21:50.937248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:48.092 [2024-12-08 14:21:50.937255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:48.092 [2024-12-08 14:21:50.937267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:48.092 [2024-12-08 14:21:50.937284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:48.092 [2024-12-08 14:21:50.937290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.092 [2024-12-08 14:21:50.937295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:48.092 [2024-12-08 14:21:50.937301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:48.093 [2024-12-08 14:21:50.937306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:48.093 [2024-12-08 14:21:50.937317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:48.093 [2024-12-08 
14:21:50.937323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:48.093 [2024-12-08 14:21:50.937334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:48.093 [2024-12-08 14:21:50.937339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:48.093 [2024-12-08 14:21:50.937350] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:48.093 [2024-12-08 14:21:50.937356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:48.093 [2024-12-08 14:21:50.937369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:48.093 [2024-12-08 14:21:50.937374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:48.093 [2024-12-08 14:21:50.937385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:48.093 [2024-12-08 14:21:50.937391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.093 [2024-12-08 14:21:50.937396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:48.093 [2024-12-08 14:21:50.937403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.093 [2024-12-08 14:21:50.937413] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:48.093 [2024-12-08 14:21:50.937419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:48.093 [2024-12-08 14:21:50.937425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:48.093 [2024-12-08 14:21:50.937439] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:48.093 [2024-12-08 14:21:50.937444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:48.093 [2024-12-08 14:21:50.937451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:48.093 [2024-12-08 14:21:50.937456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:48.093 [2024-12-08 14:21:50.937464] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:48.093 [2024-12-08 14:21:50.937469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:48.093 [2024-12-08 14:21:50.937476] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:48.093 [2024-12-08 14:21:50.937483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:48.093 [2024-12-08 14:21:50.937497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:48.093 [2024-12-08 
14:21:50.937503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:48.093 [2024-12-08 14:21:50.937515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:48.093 [2024-12-08 14:21:50.937520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:48.093 [2024-12-08 14:21:50.937527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:48.093 [2024-12-08 14:21:50.937532] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:48.093 [2024-12-08 14:21:50.937567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:48.093 [2024-12-08 14:21:50.937573] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:48.093 [2024-12-08 14:21:50.937580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937586] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:48.093 [2024-12-08 14:21:50.937593] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:48.093 [2024-12-08 14:21:50.937598] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:48.093 [2024-12-08 14:21:50.937605] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:48.093 [2024-12-08 14:21:50.937610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.937617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:48.093 [2024-12-08 14:21:50.937622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.465 ms 00:26:48.093 [2024-12-08 14:21:50.937629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.949330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.949361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:26:48.093 [2024-12-08 14:21:50.949369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.672 ms 00:26:48.093 [2024-12-08 14:21:50.949376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.949406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.949414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:48.093 [2024-12-08 14:21:50.949422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:48.093 [2024-12-08 14:21:50.949429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.973251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.973280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:48.093 [2024-12-08 14:21:50.973288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.790 ms 00:26:48.093 [2024-12-08 14:21:50.973296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.973320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.973328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:48.093 [2024-12-08 14:21:50.973335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:48.093 [2024-12-08 14:21:50.973343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.973662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.973677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:48.093 [2024-12-08 14:21:50.973684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:26:48.093 [2024-12-08 14:21:50.973691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.973725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.973735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:48.093 [2024-12-08 14:21:50.973742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:48.093 [2024-12-08 14:21:50.973748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.985701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.985825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:48.093 [2024-12-08 14:21:50.985838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.940 ms 00:26:48.093 [2024-12-08 14:21:50.985846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.093 [2024-12-08 14:21:50.995002] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:48.093 [2024-12-08 14:21:50.995722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.093 [2024-12-08 14:21:50.995746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:48.093 [2024-12-08 14:21:50.995754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.815 ms 00:26:48.093 [2024-12-08 14:21:50.995760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.351 [2024-12-08 
14:21:51.018046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.351 [2024-12-08 14:21:51.018074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:48.351 [2024-12-08 14:21:51.018084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.264 ms 00:26:48.351 [2024-12-08 14:21:51.018091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.351 [2024-12-08 14:21:51.018124] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:26:48.351 [2024-12-08 14:21:51.018132] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:52.553 [2024-12-08 14:21:54.859652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:54.859839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:52.553 [2024-12-08 14:21:54.859866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3841.509 ms 00:26:52.553 [2024-12-08 14:21:54.859875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:54.859999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:54.860012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:52.553 [2024-12-08 14:21:54.860026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:52.553 [2024-12-08 14:21:54.860033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:54.884091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:54.884127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:52.553 [2024-12-08 14:21:54.884141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.010 ms 00:26:52.553 [2024-12-08 14:21:54.884149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:54.907732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:54.907776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:52.553 [2024-12-08 14:21:54.907791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.540 ms 00:26:52.553 [2024-12-08 14:21:54.907798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:54.908136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:54.908146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:52.553 [2024-12-08 14:21:54.908157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:26:52.553 [2024-12-08 14:21:54.908164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:54.975763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:54.975811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:52.553 [2024-12-08 14:21:54.975827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 67.559 ms 00:26:52.553 [2024-12-08 14:21:54.975836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:55.003069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:52.553 [2024-12-08 14:21:55.003122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:52.553 [2024-12-08 14:21:55.003137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.175 ms 00:26:52.553 [2024-12-08 14:21:55.003146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:55.004878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:55.004929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:52.553 [2024-12-08 14:21:55.004945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.666 ms 00:26:52.553 [2024-12-08 14:21:55.004954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:55.031760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:55.031811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:52.553 [2024-12-08 14:21:55.031827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.733 ms 00:26:52.553 [2024-12-08 14:21:55.031835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:55.031893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:55.031903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:52.553 [2024-12-08 14:21:55.031914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:52.553 [2024-12-08 14:21:55.031922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.553 [2024-12-08 14:21:55.032044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.553 [2024-12-08 14:21:55.032056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:52.554 [2024-12-08 14:21:55.032067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:52.554 [2024-12-08 14:21:55.032075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.554 [2024-12-08 14:21:55.033260] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4105.569 ms, result 0 00:26:52.554 { 00:26:52.554 "name": "ftl", 00:26:52.554 "uuid": "d407888f-f904-40e3-9a0a-82db9f134e12" 00:26:52.554 } 00:26:52.554 14:21:55 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:52.554 [2024-12-08 14:21:55.244365] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.554 14:21:55 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:52.815 14:21:55 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:52.815 [2024-12-08 14:21:55.660830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:52.815 14:21:55 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:53.076 [2024-12-08 14:21:55.862311] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:53.076 14:21:55 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:53.337 Fill 
FTL, iteration 1 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:53.337 14:21:56 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:53.337 14:21:56 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:53.338 14:21:56 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:53.338 14:21:56 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:53.338 14:21:56 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:53.338 14:21:56 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:53.338 14:21:56 -- ftl/common.sh@163 -- # spdk_ini_pid=78804 00:26:53.338 14:21:56 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:53.338 14:21:56 -- ftl/common.sh@165 -- # waitforlisten 78804 /var/tmp/spdk.tgt.sock 00:26:53.338 14:21:56 -- common/autotest_common.sh@829 -- # '[' -z 78804 ']' 00:26:53.338 14:21:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:53.338 14:21:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.338 14:21:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:53.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:53.338 14:21:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.338 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:53.599 [2024-12-08 14:21:56.270087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
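At this point the tcp_dd helper in ftl/common.sh has launched a second SPDK application (spdk_tgt, pid 78804) on a private RPC socket to play the NVMe/TCP initiator role, separate from the target process that owns the FTL bdev. A minimal sketch of that launch-and-wait pattern, with the binary path, cpumask and socket taken verbatim from the trace above (waitforlisten is the autotest helper, seen at common.sh@165, that polls the socket until the RPC server responds):

# start the initiator-side SPDK app pinned to core 1, with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
# block until the RPC server on that socket starts accepting requests
waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock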
00:26:53.599 [2024-12-08 14:21:56.270466] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78804 ] 00:26:53.599 [2024-12-08 14:21:56.423387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.861 [2024-12-08 14:21:56.648621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:53.861 [2024-12-08 14:21:56.649095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.247 14:21:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.247 14:21:57 -- common/autotest_common.sh@862 -- # return 0 00:26:55.247 14:21:57 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:55.247 ftln1 00:26:55.247 14:21:58 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:55.247 14:21:58 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:55.508 14:21:58 -- ftl/common.sh@173 -- # echo ']}' 00:26:55.508 14:21:58 -- ftl/common.sh@176 -- # killprocess 78804 00:26:55.508 14:21:58 -- common/autotest_common.sh@936 -- # '[' -z 78804 ']' 00:26:55.508 14:21:58 -- common/autotest_common.sh@940 -- # kill -0 78804 00:26:55.508 14:21:58 -- common/autotest_common.sh@941 -- # uname 00:26:55.508 14:21:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:55.508 14:21:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78804 00:26:55.508 killing process with pid 78804 00:26:55.508 14:21:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:55.508 14:21:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:55.508 14:21:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78804' 00:26:55.508 14:21:58 -- common/autotest_common.sh@955 -- # kill 78804 00:26:55.508 14:21:58 -- common/autotest_common.sh@960 -- # wait 78804 00:26:56.886 14:21:59 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:56.886 14:21:59 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:56.886 [2024-12-08 14:21:59.579582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
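Between the initiator startup and this fill, the trace shows the sequence tcp_dd uses to hand the remote FTL device to spdk_dd: attach the NVMe/TCP namespace (which surfaces locally as bdev ftln1), capture the bdev subsystem config (judging from the save_subsystem_config call and the later --json flag, into test/ftl/config/ini.json), then kill the helper target so spdk_dd can replay that config itself. A sketch of the attach plus the iteration-1 fill pass, arguments copied from the trace:

# attach the namespace exported by the FTL target; the local bdev becomes ftln1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
    bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# iteration 1: write 1024 x 1 MiB blocks of random data into ftln1 at queue depth 2
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0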
00:26:56.886 [2024-12-08 14:21:59.579684] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78852 ] 00:26:56.886 [2024-12-08 14:21:59.724693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.145 [2024-12-08 14:21:59.883293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.531  [2024-12-08T14:22:02.393Z] Copying: 237/1024 [MB] (237 MBps) [2024-12-08T14:22:03.338Z] Copying: 472/1024 [MB] (235 MBps) [2024-12-08T14:22:04.280Z] Copying: 710/1024 [MB] (238 MBps) [2024-12-08T14:22:04.853Z] Copying: 946/1024 [MB] (236 MBps) [2024-12-08T14:22:05.424Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:27:02.504 00:27:02.504 Calculate MD5 checksum, iteration 1 00:27:02.504 14:22:05 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:02.504 14:22:05 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:02.504 14:22:05 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:02.504 14:22:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:02.504 14:22:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:02.504 14:22:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:02.504 14:22:05 -- ftl/common.sh@154 -- # return 0 00:27:02.504 14:22:05 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:02.504 [2024-12-08 14:22:05.309124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
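The verification pass reads the same 1 GiB region back out of ftln1 into a scratch file and keeps only the MD5 digest; upgrade_shutdown.sh stores it in the sums array, presumably so the data can be re-checked once the device comes back from the upgrade shutdown. A sketch of the readback and digest step, arguments as logged:

# read the just-written 1 GiB back from ftln1 into a plain file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
# keep only the digest column; this is what lands in sums[i]
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '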
00:27:02.504 [2024-12-08 14:22:05.309232] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78916 ] 00:27:02.763 [2024-12-08 14:22:05.453183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.763 [2024-12-08 14:22:05.616499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.142  [2024-12-08T14:22:07.732Z] Copying: 645/1024 [MB] (645 MBps) [2024-12-08T14:22:08.301Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:27:05.381 00:27:05.381 14:22:08 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:05.381 14:22:08 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:07.922 14:22:10 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:07.922 Fill FTL, iteration 2 00:27:07.922 14:22:10 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5938fcbd59e818f8bb72219552c2bd7e 00:27:07.922 14:22:10 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:07.922 14:22:10 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:07.922 14:22:10 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:07.922 14:22:10 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:07.922 14:22:10 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.922 14:22:10 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.922 14:22:10 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.922 14:22:10 -- ftl/common.sh@154 -- # return 0 00:27:07.922 14:22:10 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:07.922 [2024-12-08 14:22:10.527157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
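The seek/skip values traced here (seek: 0 -> 1024 -> 2048, skip: 0 -> 1024) advance by count after each pass, so iteration 2 writes and verifies the second gigabyte and the two iterations together touch 2 GiB of the 20 GiB base device. A reconstructed (not verbatim) sketch of the loop shape in upgrade_shutdown.sh, consistent with the traced script lines; $testfile stands in for the test/ftl/file path:

# fill/verify loop: each pass moves the 1 GiB window forward by `count` blocks
seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    seek=$((seek + count))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
    skip=$((skip + count))
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
done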
00:27:07.922 [2024-12-08 14:22:10.527829] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78973 ] 00:27:07.922 [2024-12-08 14:22:10.675653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.180 [2024-12-08 14:22:10.841805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.569  [2024-12-08T14:22:13.433Z] Copying: 240/1024 [MB] (240 MBps) [2024-12-08T14:22:14.374Z] Copying: 483/1024 [MB] (243 MBps) [2024-12-08T14:22:15.315Z] Copying: 729/1024 [MB] (246 MBps) [2024-12-08T14:22:15.572Z] Copying: 971/1024 [MB] (242 MBps) [2024-12-08T14:22:16.137Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:27:13.217 00:27:13.217 Calculate MD5 checksum, iteration 2 00:27:13.217 14:22:16 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:13.217 14:22:16 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:13.217 14:22:16 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:13.217 14:22:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.217 14:22:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.217 14:22:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.217 14:22:16 -- ftl/common.sh@154 -- # return 0 00:27:13.217 14:22:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:13.217 [2024-12-08 14:22:16.117176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
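Once both checksums are recorded, the test turns to the FTL property RPCs that drive the rest of this log: verbose_mode unlocks the extended state dump (band and chunk tables) in bdev_ftl_get_properties, and prep_upgrade_on_shutdown asks FTL to run its pre-upgrade persistence work at shutdown, which shows up further down as the long "Stop core poller" step plus the Persist L2P / NV cache metadata / superblock actions. The three RPCs, as invoked in the trace:

# expose the advanced properties (bands, chunks) in the properties dump
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
# request the upgrade-preparation actions at the next shutdown
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
# verify: the dump should now report prep_upgrade_on_shutdown = true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl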
00:27:13.217 [2024-12-08 14:22:16.117270] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79039 ] 00:27:13.475 [2024-12-08 14:22:16.260132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.734 [2024-12-08 14:22:16.429189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.115  [2024-12-08T14:22:18.609Z] Copying: 631/1024 [MB] (631 MBps) [2024-12-08T14:22:19.552Z] Copying: 1024/1024 [MB] (average 636 MBps) 00:27:16.632 00:27:16.632 14:22:19 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:16.632 14:22:19 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:19.182 14:22:21 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:19.182 14:22:21 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8256785e9e1bfa4c53835981f9c0f5b5 00:27:19.182 14:22:21 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:19.182 14:22:21 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.182 14:22:21 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:19.182 [2024-12-08 14:22:21.805969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.183 [2024-12-08 14:22:21.806018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:19.183 [2024-12-08 14:22:21.806028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:19.183 [2024-12-08 14:22:21.806038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.183 [2024-12-08 14:22:21.806057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.183 [2024-12-08 14:22:21.806064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:19.183 [2024-12-08 14:22:21.806070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:19.183 [2024-12-08 14:22:21.806076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.183 [2024-12-08 14:22:21.806091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.183 [2024-12-08 14:22:21.806097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:19.183 [2024-12-08 14:22:21.806108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:19.183 [2024-12-08 14:22:21.806114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.183 [2024-12-08 14:22:21.806163] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:27:19.183 true 00:27:19.183 14:22:21 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.183 { 00:27:19.183 "name": "ftl", 00:27:19.183 "properties": [ 00:27:19.183 { 00:27:19.183 "name": "superblock_version", 00:27:19.183 "value": 5, 00:27:19.183 "read-only": true 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "name": "base_device", 00:27:19.183 "bands": [ 00:27:19.183 { 00:27:19.183 "id": 0, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 1, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 2, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 
00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 3, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 4, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 5, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 6, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 7, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 8, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 9, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 10, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 11, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 12, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 13, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 14, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 15, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 16, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 17, 00:27:19.183 "state": "FREE", 00:27:19.183 "validity": 0.0 00:27:19.183 } 00:27:19.183 ], 00:27:19.183 "read-only": true 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "name": "cache_device", 00:27:19.183 "type": "bdev", 00:27:19.183 "chunks": [ 00:27:19.183 { 00:27:19.183 "id": 0, 00:27:19.183 "state": "CLOSED", 00:27:19.183 "utilization": 1.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 1, 00:27:19.183 "state": "CLOSED", 00:27:19.183 "utilization": 1.0 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 2, 00:27:19.183 "state": "OPEN", 00:27:19.183 "utilization": 0.001953125 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "id": 3, 00:27:19.183 "state": "OPEN", 00:27:19.183 "utilization": 0.0 00:27:19.183 } 00:27:19.183 ], 00:27:19.183 "read-only": true 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "name": "verbose_mode", 00:27:19.183 "value": true, 00:27:19.183 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:19.183 }, 00:27:19.183 { 00:27:19.183 "name": "prep_upgrade_on_shutdown", 00:27:19.183 "value": false, 00:27:19.183 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:19.183 } 00:27:19.183 ] 00:27:19.183 } 00:27:19.183 14:22:22 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:19.442 [2024-12-08 14:22:22.187445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.442 [2024-12-08 14:22:22.187481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:19.442 [2024-12-08 14:22:22.187491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:19.442 [2024-12-08 14:22:22.187497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.442 [2024-12-08 14:22:22.187514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:19.442 [2024-12-08 14:22:22.187520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:19.442 [2024-12-08 14:22:22.187526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:19.442 [2024-12-08 14:22:22.187532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.442 [2024-12-08 14:22:22.187546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.442 [2024-12-08 14:22:22.187552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:19.442 [2024-12-08 14:22:22.187558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:19.442 [2024-12-08 14:22:22.187563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.442 [2024-12-08 14:22:22.187605] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.153 ms, result 0 00:27:19.442 true 00:27:19.442 14:22:22 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:19.442 14:22:22 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:19.442 14:22:22 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.701 14:22:22 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:19.701 14:22:22 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:19.701 14:22:22 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:19.701 [2024-12-08 14:22:22.543759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.701 [2024-12-08 14:22:22.543868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:19.701 [2024-12-08 14:22:22.543930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:19.701 [2024-12-08 14:22:22.543948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.701 [2024-12-08 14:22:22.543990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.701 [2024-12-08 14:22:22.544008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:19.701 [2024-12-08 14:22:22.544022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:19.701 [2024-12-08 14:22:22.544037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.701 [2024-12-08 14:22:22.544060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.701 [2024-12-08 14:22:22.544075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:19.701 [2024-12-08 14:22:22.544090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:19.701 [2024-12-08 14:22:22.544131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.701 [2024-12-08 14:22:22.544190] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.416 ms, result 0 00:27:19.701 true 00:27:19.701 14:22:22 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.984 { 00:27:19.984 "name": "ftl", 00:27:19.984 "properties": [ 00:27:19.984 { 00:27:19.984 "name": "superblock_version", 00:27:19.984 "value": 5, 00:27:19.984 "read-only": true 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 
"name": "base_device", 00:27:19.984 "bands": [ 00:27:19.984 { 00:27:19.984 "id": 0, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 1, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 2, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 3, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 4, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 5, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 6, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 7, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 8, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 9, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 10, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 11, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 12, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 13, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 14, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 15, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 16, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 17, 00:27:19.984 "state": "FREE", 00:27:19.984 "validity": 0.0 00:27:19.984 } 00:27:19.984 ], 00:27:19.984 "read-only": true 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "name": "cache_device", 00:27:19.984 "type": "bdev", 00:27:19.984 "chunks": [ 00:27:19.984 { 00:27:19.984 "id": 0, 00:27:19.984 "state": "CLOSED", 00:27:19.984 "utilization": 1.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 1, 00:27:19.984 "state": "CLOSED", 00:27:19.984 "utilization": 1.0 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 2, 00:27:19.984 "state": "OPEN", 00:27:19.984 "utilization": 0.001953125 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "id": 3, 00:27:19.984 "state": "OPEN", 00:27:19.984 "utilization": 0.0 00:27:19.984 } 00:27:19.984 ], 00:27:19.984 "read-only": true 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "name": "verbose_mode", 00:27:19.984 "value": true, 00:27:19.984 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:19.984 }, 00:27:19.984 { 00:27:19.984 "name": "prep_upgrade_on_shutdown", 00:27:19.984 "value": true, 00:27:19.984 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:19.984 } 00:27:19.984 ] 00:27:19.984 } 00:27:19.984 14:22:22 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:19.984 14:22:22 -- ftl/common.sh@130 -- # [[ -n 78667 ]] 00:27:19.984 14:22:22 -- ftl/common.sh@131 -- # killprocess 78667 00:27:19.984 14:22:22 -- common/autotest_common.sh@936 -- # '[' -z 78667 ']' 00:27:19.984 14:22:22 -- 
common/autotest_common.sh@940 -- # kill -0 78667 00:27:19.984 14:22:22 -- common/autotest_common.sh@941 -- # uname 00:27:19.984 14:22:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:19.984 14:22:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78667 00:27:19.984 14:22:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:19.984 14:22:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:19.984 killing process with pid 78667 00:27:19.984 14:22:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78667' 00:27:19.984 14:22:22 -- common/autotest_common.sh@955 -- # kill 78667 00:27:19.984 14:22:22 -- common/autotest_common.sh@960 -- # wait 78667 00:27:20.553 [2024-12-08 14:22:23.293190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:20.553 [2024-12-08 14:22:23.303246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.553 [2024-12-08 14:22:23.303279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:20.553 [2024-12-08 14:22:23.303289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:20.553 [2024-12-08 14:22:23.303295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.553 [2024-12-08 14:22:23.303311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:20.553 [2024-12-08 14:22:23.305406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.553 [2024-12-08 14:22:23.305438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:20.553 [2024-12-08 14:22:23.305446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.084 ms 00:27:20.553 [2024-12-08 14:22:23.305452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.690 [2024-12-08 14:22:31.470864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.690 [2024-12-08 14:22:31.470916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:28.690 [2024-12-08 14:22:31.470928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8165.363 ms 00:27:28.690 [2024-12-08 14:22:31.470935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.690 [2024-12-08 14:22:31.471943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.690 [2024-12-08 14:22:31.471969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:28.690 [2024-12-08 14:22:31.471977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:27:28.690 [2024-12-08 14:22:31.471992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.690 [2024-12-08 14:22:31.472837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.690 [2024-12-08 14:22:31.472853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:28.691 [2024-12-08 14:22:31.472860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.825 ms 00:27:28.691 [2024-12-08 14:22:31.472865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.480610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.480636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:28.691 [2024-12-08 14:22:31.480643] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.710 ms 00:27:28.691 [2024-12-08 14:22:31.480649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.485808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.485834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:28.691 [2024-12-08 14:22:31.485842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.134 ms 00:27:28.691 [2024-12-08 14:22:31.485848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.485904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.485912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:28.691 [2024-12-08 14:22:31.485918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:28.691 [2024-12-08 14:22:31.485924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.493390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.493498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:28.691 [2024-12-08 14:22:31.493510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.450 ms 00:27:28.691 [2024-12-08 14:22:31.493516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.500865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.500959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:28.691 [2024-12-08 14:22:31.500970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.327 ms 00:27:28.691 [2024-12-08 14:22:31.500975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.508127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.508153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:28.691 [2024-12-08 14:22:31.508160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.121 ms 00:27:28.691 [2024-12-08 14:22:31.508165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.515462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.515555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:28.691 [2024-12-08 14:22:31.515566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.251 ms 00:27:28.691 [2024-12-08 14:22:31.515572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.515592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:28.691 [2024-12-08 14:22:31.515602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:28.691 [2024-12-08 14:22:31.515610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:28.691 [2024-12-08 14:22:31.515616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:28.691 [2024-12-08 14:22:31.515621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:28.691 [2024-12-08 14:22:31.515712] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:28.691 [2024-12-08 14:22:31.515717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d407888f-f904-40e3-9a0a-82db9f134e12 00:27:28.691 [2024-12-08 14:22:31.515723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:28.691 [2024-12-08 14:22:31.515729] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:28.691 [2024-12-08 14:22:31.515734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:28.691 [2024-12-08 14:22:31.515740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:28.691 [2024-12-08 14:22:31.515745] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:28.691 [2024-12-08 14:22:31.515750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:28.691 [2024-12-08 14:22:31.515756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:28.691 [2024-12-08 14:22:31.515760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:28.691 [2024-12-08 14:22:31.515765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:28.691 [2024-12-08 14:22:31.515772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.515781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:28.691 [2024-12-08 14:22:31.515787] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:27:28.691 [2024-12-08 14:22:31.515792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.525648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.525671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:28.691 [2024-12-08 14:22:31.525680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.836 ms 00:27:28.691 [2024-12-08 14:22:31.525686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.525841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.691 [2024-12-08 14:22:31.525848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:28.691 [2024-12-08 14:22:31.525854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.140 ms 00:27:28.691 [2024-12-08 14:22:31.525859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.561561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.691 [2024-12-08 14:22:31.561666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:28.691 [2024-12-08 14:22:31.561678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.691 [2024-12-08 14:22:31.561685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.691 [2024-12-08 14:22:31.561711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.691 [2024-12-08 14:22:31.561719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:28.691 [2024-12-08 14:22:31.561725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.691 [2024-12-08 14:22:31.561730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.692 [2024-12-08 14:22:31.561782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.692 [2024-12-08 14:22:31.561790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:28.692 [2024-12-08 14:22:31.561796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.692 [2024-12-08 14:22:31.561802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.692 [2024-12-08 14:22:31.561814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.692 [2024-12-08 14:22:31.561822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:28.692 [2024-12-08 14:22:31.561827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.692 [2024-12-08 14:22:31.561833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.620832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.620952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:28.952 [2024-12-08 14:22:31.620965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.620972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.643876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.643962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:28.952 
[2024-12-08 14:22:31.643973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.643979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.644045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:28.952 [2024-12-08 14:22:31.644051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.644056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.644093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:28.952 [2024-12-08 14:22:31.644102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.644108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.644188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:28.952 [2024-12-08 14:22:31.644194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.644200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.644229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:28.952 [2024-12-08 14:22:31.644235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.644243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.644275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:28.952 [2024-12-08 14:22:31.644281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.644286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:28.952 [2024-12-08 14:22:31.644326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:28.952 [2024-12-08 14:22:31.644334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:28.952 [2024-12-08 14:22:31.644339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.952 [2024-12-08 14:22:31.644426] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8341.132 ms, result 0 00:27:32.253 14:22:35 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:32.253 14:22:35 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:32.253 14:22:35 -- ftl/common.sh@81 -- # local base_bdev= 00:27:32.253 14:22:35 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:32.253 14:22:35 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:32.253 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:27:32.253 14:22:35 -- ftl/common.sh@89 -- # spdk_tgt_pid=79230 00:27:32.253 14:22:35 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:32.253 14:22:35 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:32.253 14:22:35 -- ftl/common.sh@91 -- # waitforlisten 79230 00:27:32.253 14:22:35 -- common/autotest_common.sh@829 -- # '[' -z 79230 ']' 00:27:32.253 14:22:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.253 14:22:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.253 14:22:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.253 14:22:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.253 14:22:35 -- common/autotest_common.sh@10 -- # set +x 00:27:32.253 [2024-12-08 14:22:35.102137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:32.253 [2024-12-08 14:22:35.103071] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79230 ] 00:27:32.511 [2024-12-08 14:22:35.252329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.511 [2024-12-08 14:22:35.404816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:32.511 [2024-12-08 14:22:35.405081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.075 [2024-12-08 14:22:35.934344] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:33.075 [2024-12-08 14:22:35.934394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:33.334 [2024-12-08 14:22:36.070482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.070517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:33.334 [2024-12-08 14:22:36.070528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:33.334 [2024-12-08 14:22:36.070534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.070571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.070581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:33.334 [2024-12-08 14:22:36.070587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:33.334 [2024-12-08 14:22:36.070593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.070607] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:33.334 [2024-12-08 14:22:36.071175] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:33.334 [2024-12-08 14:22:36.071187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.071193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:33.334 [2024-12-08 14:22:36.071199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.583 ms 00:27:33.334 [2024-12-08 14:22:36.071205] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.072141] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:33.334 [2024-12-08 14:22:36.081764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.081791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:33.334 [2024-12-08 14:22:36.081800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.624 ms 00:27:33.334 [2024-12-08 14:22:36.081806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.081902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.081910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:33.334 [2024-12-08 14:22:36.081916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:33.334 [2024-12-08 14:22:36.081922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.086320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.086345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:33.334 [2024-12-08 14:22:36.086351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.357 ms 00:27:33.334 [2024-12-08 14:22:36.086360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.086388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.086394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:33.334 [2024-12-08 14:22:36.086401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:33.334 [2024-12-08 14:22:36.086406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.086443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.086450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:33.334 [2024-12-08 14:22:36.086456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:33.334 [2024-12-08 14:22:36.086462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.086484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:33.334 [2024-12-08 14:22:36.089269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.089389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:33.334 [2024-12-08 14:22:36.089405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.792 ms 00:27:33.334 [2024-12-08 14:22:36.089412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.089440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.089447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:33.334 [2024-12-08 14:22:36.089452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:33.334 [2024-12-08 14:22:36.089458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.089474] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
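The layout dump that follows reports every region twice: the sb_v5 tables give offsets and sizes in FTL blocks (hex), while dump_region gives MiB. The two views agree for a 4 KiB block size, which a quick bash cross-check on one region confirms (blk_to_mib is a hypothetical helper, not part of the test; the l2p numbers are taken from the dump below):

# Convert an FTL block count from the sb_v5 dump to MiB, assuming 4 KiB blocks.
blk_to_mib() { awk -v b="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", b * 4096 / 1048576 }'; }
blk_to_mib 0xe80   # l2p blk_sz  -> 14.50 MiB, matching "Region l2p ... blocks: 14.50 MiB"
blk_to_mib 0x20    # l2p blk_offs -> 0.12 MiB, matching "offset: 0.12 MiB"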
00:27:33.334 [2024-12-08 14:22:36.089487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:33.334 [2024-12-08 14:22:36.089512] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:33.334 [2024-12-08 14:22:36.089525] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:33.334 [2024-12-08 14:22:36.089581] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:33.334 [2024-12-08 14:22:36.089589] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:33.334 [2024-12-08 14:22:36.089596] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:33.334 [2024-12-08 14:22:36.089604] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:33.334 [2024-12-08 14:22:36.089610] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:33.334 [2024-12-08 14:22:36.089616] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:33.334 [2024-12-08 14:22:36.089623] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:33.334 [2024-12-08 14:22:36.089628] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:33.334 [2024-12-08 14:22:36.089636] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:33.334 [2024-12-08 14:22:36.089641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.089647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:33.334 [2024-12-08 14:22:36.089652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.169 ms 00:27:33.334 [2024-12-08 14:22:36.089658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.089705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.334 [2024-12-08 14:22:36.089711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:33.334 [2024-12-08 14:22:36.089716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:33.334 [2024-12-08 14:22:36.089722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.334 [2024-12-08 14:22:36.089779] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:33.334 [2024-12-08 14:22:36.089786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:33.334 [2024-12-08 14:22:36.089792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:33.334 [2024-12-08 14:22:36.089798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.334 [2024-12-08 14:22:36.089803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:33.334 [2024-12-08 14:22:36.089808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:33.334 [2024-12-08 14:22:36.089813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:33.334 [2024-12-08 14:22:36.089818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:33.334 [2024-12-08 14:22:36.089824] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:27:33.334 [2024-12-08 14:22:36.089829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.334 [2024-12-08 14:22:36.089834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:33.334 [2024-12-08 14:22:36.089839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:33.334 [2024-12-08 14:22:36.089844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.334 [2024-12-08 14:22:36.089850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:33.334 [2024-12-08 14:22:36.089856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:33.334 [2024-12-08 14:22:36.089860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.334 [2024-12-08 14:22:36.089865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:33.335 [2024-12-08 14:22:36.089870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:33.335 [2024-12-08 14:22:36.089875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.335 [2024-12-08 14:22:36.089880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:33.335 [2024-12-08 14:22:36.089885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:33.335 [2024-12-08 14:22:36.089890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:33.335 [2024-12-08 14:22:36.089895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:33.335 [2024-12-08 14:22:36.089900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:33.335 [2024-12-08 14:22:36.089905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:33.335 [2024-12-08 14:22:36.089910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:33.335 [2024-12-08 14:22:36.089915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:33.335 [2024-12-08 14:22:36.089919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:33.335 [2024-12-08 14:22:36.089924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:33.335 [2024-12-08 14:22:36.089929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:33.335 [2024-12-08 14:22:36.089934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:33.335 [2024-12-08 14:22:36.089939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:33.335 [2024-12-08 14:22:36.089943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:33.335 [2024-12-08 14:22:36.089948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:33.335 [2024-12-08 14:22:36.089953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:33.335 [2024-12-08 14:22:36.089958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:33.335 [2024-12-08 14:22:36.089963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.335 [2024-12-08 14:22:36.089967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:33.335 [2024-12-08 14:22:36.089972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:33.335 [2024-12-08 14:22:36.089977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.335 [2024-12-08 14:22:36.089996] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:33.335 [2024-12-08 14:22:36.090002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:33.335 [2024-12-08 14:22:36.090007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:33.335 [2024-12-08 14:22:36.090013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:33.335 [2024-12-08 14:22:36.090019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:33.335 [2024-12-08 14:22:36.090025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:33.335 [2024-12-08 14:22:36.090030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:33.335 [2024-12-08 14:22:36.090036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:33.335 [2024-12-08 14:22:36.090040] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:33.335 [2024-12-08 14:22:36.090045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:33.335 [2024-12-08 14:22:36.090051] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:33.335 [2024-12-08 14:22:36.090058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:33.335 [2024-12-08 14:22:36.090071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:33.335 [2024-12-08 14:22:36.090087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:33.335 [2024-12-08 14:22:36.090097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:33.335 [2024-12-08 14:22:36.090103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:33.335 [2024-12-08 14:22:36.090108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090118] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:33.335 [2024-12-08 14:22:36.090135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:33.335 [2024-12-08 14:22:36.090139] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:33.335 [2024-12-08 14:22:36.090145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090151] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:33.335 [2024-12-08 14:22:36.090156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:33.335 [2024-12-08 14:22:36.090162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:33.335 [2024-12-08 14:22:36.090167] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:33.335 [2024-12-08 14:22:36.090173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.090178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:33.335 [2024-12-08 14:22:36.090184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.427 ms 00:27:33.335 [2024-12-08 14:22:36.090189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.101838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.101867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:33.335 [2024-12-08 14:22:36.101875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.617 ms 00:27:33.335 [2024-12-08 14:22:36.101881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.101908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.101915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:33.335 [2024-12-08 14:22:36.101921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:33.335 [2024-12-08 14:22:36.101927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.125817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.125843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:33.335 [2024-12-08 14:22:36.125852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.851 ms 00:27:33.335 [2024-12-08 14:22:36.125859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.125879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.125886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:33.335 [2024-12-08 14:22:36.125893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:33.335 [2024-12-08 14:22:36.125900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.126231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.126247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:33.335 [2024-12-08 
14:22:36.126254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:27:33.335 [2024-12-08 14:22:36.126259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.126289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.126295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:33.335 [2024-12-08 14:22:36.126301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:33.335 [2024-12-08 14:22:36.126307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.138195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.138219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:33.335 [2024-12-08 14:22:36.138226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.871 ms 00:27:33.335 [2024-12-08 14:22:36.138231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.148536] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:33.335 [2024-12-08 14:22:36.148642] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:33.335 [2024-12-08 14:22:36.148658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.148664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:33.335 [2024-12-08 14:22:36.148671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.354 ms 00:27:33.335 [2024-12-08 14:22:36.148681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.159508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.159604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:33.335 [2024-12-08 14:22:36.159617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.800 ms 00:27:33.335 [2024-12-08 14:22:36.159624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.335 [2024-12-08 14:22:36.168710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.335 [2024-12-08 14:22:36.168736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:33.335 [2024-12-08 14:22:36.168743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.058 ms 00:27:33.336 [2024-12-08 14:22:36.168749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.177624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.177648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:33.336 [2024-12-08 14:22:36.177655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.847 ms 00:27:33.336 [2024-12-08 14:22:36.177661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.177930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.177939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:33.336 [2024-12-08 14:22:36.177945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 
00:27:33.336 [2024-12-08 14:22:36.177951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.223713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.223744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:33.336 [2024-12-08 14:22:36.223753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.749 ms 00:27:33.336 [2024-12-08 14:22:36.223758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.231975] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:33.336 [2024-12-08 14:22:36.232505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.232529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:33.336 [2024-12-08 14:22:36.232536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.709 ms 00:27:33.336 [2024-12-08 14:22:36.232545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.232587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.232594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:33.336 [2024-12-08 14:22:36.232601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:33.336 [2024-12-08 14:22:36.232606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.232637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.232644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:33.336 [2024-12-08 14:22:36.232650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:33.336 [2024-12-08 14:22:36.232656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.233642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.233667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:33.336 [2024-12-08 14:22:36.233675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.970 ms 00:27:33.336 [2024-12-08 14:22:36.233680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.233699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.233705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:33.336 [2024-12-08 14:22:36.233711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:33.336 [2024-12-08 14:22:36.233716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.336 [2024-12-08 14:22:36.233743] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:33.336 [2024-12-08 14:22:36.233750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.336 [2024-12-08 14:22:36.233758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:33.336 [2024-12-08 14:22:36.233764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:33.336 [2024-12-08 14:22:36.233769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.595 [2024-12-08 14:22:36.251135] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.596 [2024-12-08 14:22:36.251162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:33.596 [2024-12-08 14:22:36.251170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.351 ms 00:27:33.596 [2024-12-08 14:22:36.251176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.596 [2024-12-08 14:22:36.251232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.596 [2024-12-08 14:22:36.251239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:33.596 [2024-12-08 14:22:36.251245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:33.596 [2024-12-08 14:22:36.251250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.596 [2024-12-08 14:22:36.251965] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 181.172 ms, result 0 00:27:33.596 [2024-12-08 14:22:36.267467] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.596 [2024-12-08 14:22:36.283468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:33.596 [2024-12-08 14:22:36.291595] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:33.854 14:22:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.854 14:22:36 -- common/autotest_common.sh@862 -- # return 0 00:27:33.854 14:22:36 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:33.854 14:22:36 -- ftl/common.sh@95 -- # return 0 00:27:33.854 14:22:36 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:33.854 [2024-12-08 14:22:36.761011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.854 [2024-12-08 14:22:36.761063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:33.854 [2024-12-08 14:22:36.761076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:33.854 [2024-12-08 14:22:36.761084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.854 [2024-12-08 14:22:36.761107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.854 [2024-12-08 14:22:36.761115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:33.854 [2024-12-08 14:22:36.761123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:33.854 [2024-12-08 14:22:36.761134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.854 [2024-12-08 14:22:36.761153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.854 [2024-12-08 14:22:36.761161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:33.854 [2024-12-08 14:22:36.761169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:33.854 [2024-12-08 14:22:36.761176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.854 [2024-12-08 14:22:36.761230] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.223 ms, result 0 00:27:33.854 true 00:27:34.112 14:22:36 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl
{
"name": "ftl",
"properties": [
{
"name": "superblock_version",
"value": 5,
"read-only": true
},
{
"name": "base_device",
"bands": [
  { "id": 0, "state": "CLOSED", "validity": 1.0 },
  { "id": 1, "state": "CLOSED", "validity": 1.0 },
  { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
  { "id": 3, "state": "FREE", "validity": 0.0 },
  { "id": 4, "state": "FREE", "validity": 0.0 },
  { "id": 5, "state": "FREE", "validity": 0.0 },
  { "id": 6, "state": "FREE", "validity": 0.0 },
  { "id": 7, "state": "FREE", "validity": 0.0 },
  { "id": 8, "state": "FREE", "validity": 0.0 },
  { "id": 9, "state": "FREE", "validity": 0.0 },
  { "id": 10, "state": "FREE", "validity": 0.0 },
  { "id": 11, "state": "FREE", "validity": 0.0 },
  { "id": 12, "state": "FREE", "validity": 0.0 },
  { "id": 13, "state": "FREE", "validity": 0.0 },
  { "id": 14, "state": "FREE", "validity": 0.0 },
  { "id": 15, "state": "FREE", "validity": 0.0 },
  { "id": 16, "state": "FREE", "validity": 0.0 },
  { "id": 17, "state": "FREE", "validity": 0.0 }
],
"read-only": true
},
{
"name": "cache_device",
"type": "bdev",
"chunks": [
  { "id": 0, "state": "OPEN", "utilization": 0.0 },
  { "id": 1, "state": "OPEN", "utilization": 0.0 },
  { "id": 2, "state": "FREE", "utilization": 0.0 },
  { "id": 3, "state": "FREE", "utilization": 0.0 }
],
"read-only": true
},
{
"name": "verbose_mode",
"value": true,
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
},
{
"name": "prep_upgrade_on_shutdown",
"value": false,
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
}
]
}
14:22:36 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
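With the target back up, the script re-reads the properties and asserts that the clean shutdown left no partially filled cache chunks and no open bands. A minimal sketch of that assertion, using the two jq filters visible in the trace and assuming the target still listens on the default /var/tmp/spdk.sock:

#!/usr/bin/env bash
# After the "Set FTL clean state" step above, both counts are expected to be 0.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
props=$("$rpc" bdev_ftl_get_properties -b ftl)
used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
(( used == 0 && opened == 0 )) || { echo "dirty state: used=$used opened=$opened"; exit 1; }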
00:27:34.113 14:22:36 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:34.113 14:22:36 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.371 14:22:37 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:34.371 14:22:37 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:34.371 14:22:37 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:34.371 14:22:37 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:34.371 14:22:37 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:34.629 Validate MD5 checksum, iteration 1 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:34.629 14:22:37 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:34.629 14:22:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:34.629 14:22:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:34.629 14:22:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:34.629 14:22:37 -- ftl/common.sh@154 -- # return 0 00:27:34.629 14:22:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:34.629 [2024-12-08 14:22:37.439318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:34.629 [2024-12-08 14:22:37.439522] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79269 ] 00:27:34.886 [2024-12-08 14:22:37.588989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.886 [2024-12-08 14:22:37.782425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.790  [2024-12-08T14:22:39.971Z] Copying: 653/1024 [MB] (653 MBps) [2024-12-08T14:22:41.353Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:27:38.433 00:27:38.433 14:22:41 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:38.433 14:22:41 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:40.408 14:22:43 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:40.408 14:22:43 -- ftl/upgrade_shutdown.sh@103 -- # sum=5938fcbd59e818f8bb72219552c2bd7e 00:27:40.408 14:22:43 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5938fcbd59e818f8bb72219552c2bd7e != \5\9\3\8\f\c\b\d\5\9\e\8\1\8\f\8\b\b\7\2\2\1\9\5\5\2\c\2\b\d\7\e ]] 00:27:40.408 14:22:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:40.666 14:22:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.666 14:22:43 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:40.666 Validate MD5 checksum, iteration 2 00:27:40.666 14:22:43 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:40.666 14:22:43 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.666 14:22:43 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.666 14:22:43 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.666 14:22:43 -- ftl/common.sh@154 -- # return 0 00:27:40.666 14:22:43 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:40.666 [2024-12-08 14:22:43.378939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
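Each "Validate MD5 checksum" iteration pulls a 1024 MiB window out of the ftln1 bdev over NVMe/TCP and compares its digest with the sum recorded for that window earlier in the test (the 5938… and 8256… values visible in the surrounding trace). A condensed sketch of the loop, with those recorded sums treated as given:

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
sums=(5938fcbd59e818f8bb72219552c2bd7e 8256785e9e1bfa4c53835981f9c0f5b5)
skip=0
for i in 0 1; do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum "$file" | cut -f1 '-d ')
  [[ $sum == "${sums[$i]}" ]] || exit 1   # data survived the restart intact
  skip=$((skip + 1024))
done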
00:27:40.666 [2024-12-08 14:22:43.379050] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79337 ] 00:27:40.666 [2024-12-08 14:22:43.522796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.924 [2024-12-08 14:22:43.685961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.303  [2024-12-08T14:22:45.792Z] Copying: 701/1024 [MB] (701 MBps) [2024-12-08T14:22:46.733Z] Copying: 1024/1024 [MB] (average 687 MBps) 00:27:43.813 00:27:43.813 14:22:46 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:43.813 14:22:46 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@103 -- # sum=8256785e9e1bfa4c53835981f9c0f5b5 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@105 -- # [[ 8256785e9e1bfa4c53835981f9c0f5b5 != \8\2\5\6\7\8\5\e\9\e\1\b\f\a\4\c\5\3\8\3\5\9\8\1\f\9\c\0\f\5\b\5 ]] 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:45.720 14:22:48 -- ftl/common.sh@137 -- # [[ -n 79230 ]] 00:27:45.720 14:22:48 -- ftl/common.sh@138 -- # kill -9 79230 00:27:45.720 14:22:48 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:45.720 14:22:48 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:45.720 14:22:48 -- ftl/common.sh@81 -- # local base_bdev= 00:27:45.720 14:22:48 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:45.720 14:22:48 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:45.720 14:22:48 -- ftl/common.sh@89 -- # spdk_tgt_pid=79397 00:27:45.720 14:22:48 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:45.720 14:22:48 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:45.720 14:22:48 -- ftl/common.sh@91 -- # waitforlisten 79397 00:27:45.720 14:22:48 -- common/autotest_common.sh@829 -- # '[' -z 79397 ']' 00:27:45.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.720 14:22:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.720 14:22:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.720 14:22:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.720 14:22:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.720 14:22:48 -- common/autotest_common.sh@10 -- # set +x 00:27:45.720 [2024-12-08 14:22:48.630310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
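Both windows matching means the clean shutdown preserved the data, so the test moves on to the harder case: tcp_target_shutdown_dirty kills the target with SIGKILL, persisting nothing on the way down, and tcp_target_setup relaunches it from the same tgt.json. A condensed reconstruction of the two helpers as they appear in the ftl/common.sh trace (stand-in bodies, not the literal source):

tcp_target_shutdown_dirty() {            # common.sh@137-139 in the trace
  [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
  unset spdk_tgt_pid
}

tcp_target_setup() {                     # common.sh@81-91 in the trace
  local cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
  [[ -f $cnfg ]] || return 1             # the real helper can also rebuild the config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config="$cnfg" &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid            # poll until /var/tmp/spdk.sock answers
}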
00:27:45.720 [2024-12-08 14:22:48.630429] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79397 ] 00:27:45.980 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79230 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:45.980 [2024-12-08 14:22:48.781779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.241 [2024-12-08 14:22:48.951417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:46.241 [2024-12-08 14:22:48.951597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.812 [2024-12-08 14:22:49.532223] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.812 [2024-12-08 14:22:49.532279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.813 [2024-12-08 14:22:49.672358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.672388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:46.813 [2024-12-08 14:22:49.672398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:46.813 [2024-12-08 14:22:49.672405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.672450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.672460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.813 [2024-12-08 14:22:49.672467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:46.813 [2024-12-08 14:22:49.672473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.672488] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:46.813 [2024-12-08 14:22:49.673263] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:46.813 [2024-12-08 14:22:49.673302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.673310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.813 [2024-12-08 14:22:49.673318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:27:46.813 [2024-12-08 14:22:49.673324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.673583] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:46.813 [2024-12-08 14:22:49.687556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.687704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:46.813 [2024-12-08 14:22:49.687719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.974 ms 00:27:46.813 [2024-12-08 14:22:49.687727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.694876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.694975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:46.813 [2024-12-08 14:22:49.695002] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:46.813 [2024-12-08 14:22:49.695008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.695266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.695276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.813 [2024-12-08 14:22:49.695283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:27:46.813 [2024-12-08 14:22:49.695289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.695316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.695323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.813 [2024-12-08 14:22:49.695330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:46.813 [2024-12-08 14:22:49.695338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.695357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.695364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:46.813 [2024-12-08 14:22:49.695372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:46.813 [2024-12-08 14:22:49.695378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.695399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:46.813 [2024-12-08 14:22:49.697788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.697812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.813 [2024-12-08 14:22:49.697819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.398 ms 00:27:46.813 [2024-12-08 14:22:49.697826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.697848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.697855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:46.813 [2024-12-08 14:22:49.697863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:46.813 [2024-12-08 14:22:49.697869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.697886] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:46.813 [2024-12-08 14:22:49.697902] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:46.813 [2024-12-08 14:22:49.697929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:46.813 [2024-12-08 14:22:49.697941] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:46.813 [2024-12-08 14:22:49.698026] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:46.813 [2024-12-08 14:22:49.698038] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:46.813 [2024-12-08 14:22:49.698048] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:46.813 [2024-12-08 14:22:49.698056] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698062] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698068] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:46.813 [2024-12-08 14:22:49.698074] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:46.813 [2024-12-08 14:22:49.698080] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:46.813 [2024-12-08 14:22:49.698086] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:46.813 [2024-12-08 14:22:49.698092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.698098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:46.813 [2024-12-08 14:22:49.698103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:27:46.813 [2024-12-08 14:22:49.698111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.698159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.813 [2024-12-08 14:22:49.698166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:46.813 [2024-12-08 14:22:49.698172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:46.813 [2024-12-08 14:22:49.698177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.813 [2024-12-08 14:22:49.698236] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:46.813 [2024-12-08 14:22:49.698244] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:46.813 [2024-12-08 14:22:49.698250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:46.813 [2024-12-08 14:22:49.698272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:46.813 [2024-12-08 14:22:49.698282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:46.813 [2024-12-08 14:22:49.698287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:46.813 [2024-12-08 14:22:49.698295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:46.813 [2024-12-08 14:22:49.698306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:46.813 [2024-12-08 14:22:49.698311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:46.813 [2024-12-08 14:22:49.698322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:46.813 [2024-12-08 14:22:49.698337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:46.813 [2024-12-08 14:22:49.698342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:46.813 [2024-12-08 14:22:49.698352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:46.813 [2024-12-08 14:22:49.698358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:46.813 [2024-12-08 14:22:49.698369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:46.813 [2024-12-08 14:22:49.698374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698379] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:46.813 [2024-12-08 14:22:49.698384] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:46.813 [2024-12-08 14:22:49.698389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:46.813 [2024-12-08 14:22:49.698399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:46.813 [2024-12-08 14:22:49.698404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:46.813 [2024-12-08 14:22:49.698415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:46.813 [2024-12-08 14:22:49.698420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:46.813 [2024-12-08 14:22:49.698425] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:46.813 [2024-12-08 14:22:49.698430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:46.813 [2024-12-08 14:22:49.698435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.813 [2024-12-08 14:22:49.698440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:46.813 [2024-12-08 14:22:49.698445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:46.814 [2024-12-08 14:22:49.698450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.814 [2024-12-08 14:22:49.698455] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:46.814 [2024-12-08 14:22:49.698463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:46.814 [2024-12-08 14:22:49.698469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.814 [2024-12-08 14:22:49.698475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.814 [2024-12-08 14:22:49.698481] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:46.814 [2024-12-08 14:22:49.698486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:46.814 [2024-12-08 14:22:49.698491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:46.814 [2024-12-08 14:22:49.698496] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:46.814 [2024-12-08 14:22:49.698501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:46.814 [2024-12-08 14:22:49.698507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:46.814 [2024-12-08 14:22:49.698513] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:46.814 [2024-12-08 14:22:49.698521] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:46.814 [2024-12-08 14:22:49.698533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:46.814 [2024-12-08 14:22:49.698553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:46.814 [2024-12-08 14:22:49.698558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:46.814 [2024-12-08 14:22:49.698563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:46.814 [2024-12-08 14:22:49.698568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698590] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:46.814 [2024-12-08 14:22:49.698595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:46.814 [2024-12-08 14:22:49.698600] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:46.814 [2024-12-08 14:22:49.698606] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698613] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:46.814 [2024-12-08 14:22:49.698619] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:46.814 [2024-12-08 14:22:49.698626] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:46.814 
[2024-12-08 14:22:49.698632] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:46.814 [2024-12-08 14:22:49.698637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.814 [2024-12-08 14:22:49.698643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:46.814 [2024-12-08 14:22:49.698650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.437 ms 00:27:46.814 [2024-12-08 14:22:49.698658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.814 [2024-12-08 14:22:49.710536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.814 [2024-12-08 14:22:49.710563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.814 [2024-12-08 14:22:49.710573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.845 ms 00:27:46.814 [2024-12-08 14:22:49.710580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.814 [2024-12-08 14:22:49.710609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.814 [2024-12-08 14:22:49.710616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:46.814 [2024-12-08 14:22:49.710623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:46.814 [2024-12-08 14:22:49.710629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.737344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.737460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:47.077 [2024-12-08 14:22:49.737474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.679 ms 00:27:47.077 [2024-12-08 14:22:49.737480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.737506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.737512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:47.077 [2024-12-08 14:22:49.737518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:47.077 [2024-12-08 14:22:49.737524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.737596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.737604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:47.077 [2024-12-08 14:22:49.737612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:47.077 [2024-12-08 14:22:49.737619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.737649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.737658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:47.077 [2024-12-08 14:22:49.737664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:47.077 [2024-12-08 14:22:49.737670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.751711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.751738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:47.077 [2024-12-08 
14:22:49.751746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.023 ms 00:27:47.077 [2024-12-08 14:22:49.751752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.751828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.751836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:47.077 [2024-12-08 14:22:49.751843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:47.077 [2024-12-08 14:22:49.751849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.765689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.765717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:47.077 [2024-12-08 14:22:49.765726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.826 ms 00:27:47.077 [2024-12-08 14:22:49.765732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.772830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.772856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:47.077 [2024-12-08 14:22:49.772864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:27:47.077 [2024-12-08 14:22:49.772870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.821393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.821425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:47.077 [2024-12-08 14:22:49.821435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 48.484 ms 00:27:47.077 [2024-12-08 14:22:49.821442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.821511] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:47.077 [2024-12-08 14:22:49.821543] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:47.077 [2024-12-08 14:22:49.821575] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:47.077 [2024-12-08 14:22:49.821607] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:47.077 [2024-12-08 14:22:49.821614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.821620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:47.077 [2024-12-08 14:22:49.821630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:27:47.077 [2024-12-08 14:22:49.821638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.821677] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:47.077 [2024-12-08 14:22:49.821686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.821693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:47.077 [2024-12-08 14:22:49.821699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:47.077 [2024-12-08 
14:22:49.821705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.833826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.833866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:47.077 [2024-12-08 14:22:49.833875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.104 ms 00:27:47.077 [2024-12-08 14:22:49.833882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.077 [2024-12-08 14:22:49.840248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.077 [2024-12-08 14:22:49.840273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:47.077 [2024-12-08 14:22:49.840282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:47.078 [2024-12-08 14:22:49.840288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.078 [2024-12-08 14:22:49.840329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.078 [2024-12-08 14:22:49.840336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:47.078 [2024-12-08 14:22:49.840343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:47.078 [2024-12-08 14:22:49.840349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.078 [2024-12-08 14:22:49.840503] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:47.651 [2024-12-08 14:22:50.384022] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:47.651 [2024-12-08 14:22:50.384340] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:48.591 [2024-12-08 14:22:51.179774] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:48.591 [2024-12-08 14:22:51.180011] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:48.591 [2024-12-08 14:22:51.180028] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:48.591 [2024-12-08 14:22:51.180038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.180046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:48.591 [2024-12-08 14:22:51.180059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1339.670 ms 00:27:48.591 [2024-12-08 14:22:51.180066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.180104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.180111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:48.591 [2024-12-08 14:22:51.180118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:48.591 [2024-12-08 14:22:51.180125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.189829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:48.591 [2024-12-08 14:22:51.189929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.189937] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:48.591 [2024-12-08 14:22:51.189945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.790 ms 00:27:48.591 [2024-12-08 14:22:51.189951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.190510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.190631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:48.591 [2024-12-08 14:22:51.190641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:27:48.591 [2024-12-08 14:22:51.190647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.192355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.192368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:48.591 [2024-12-08 14:22:51.192376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.695 ms 00:27:48.591 [2024-12-08 14:22:51.192382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.212199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.212231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:48.591 [2024-12-08 14:22:51.212240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.799 ms 00:27:48.591 [2024-12-08 14:22:51.212247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.212327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.212335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:48.591 [2024-12-08 14:22:51.212343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:48.591 [2024-12-08 14:22:51.212348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.213453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.213567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:48.591 [2024-12-08 14:22:51.213579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:27:48.591 [2024-12-08 14:22:51.213586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.213608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.213614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:48.591 [2024-12-08 14:22:51.213621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:48.591 [2024-12-08 14:22:51.213627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.213657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:48.591 [2024-12-08 14:22:51.213665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.213672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:48.591 [2024-12-08 14:22:51.213680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:48.591 [2024-12-08 14:22:51.213686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.213732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.591 [2024-12-08 14:22:51.213740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:48.591 [2024-12-08 14:22:51.213746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:48.591 [2024-12-08 14:22:51.213752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.591 [2024-12-08 14:22:51.214634] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1541.891 ms, result 0 00:27:48.591 [2024-12-08 14:22:51.228470] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.591 [2024-12-08 14:22:51.244457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:48.591 [2024-12-08 14:22:51.252591] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:48.591 Validate MD5 checksum, iteration 1 00:27:48.591 14:22:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.591 14:22:51 -- common/autotest_common.sh@862 -- # return 0 00:27:48.591 14:22:51 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:48.591 14:22:51 -- ftl/common.sh@95 -- # return 0 00:27:48.591 14:22:51 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:48.591 14:22:51 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:48.591 14:22:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:48.591 14:22:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:48.591 14:22:51 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:48.591 14:22:51 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:48.591 14:22:51 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.591 14:22:51 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.591 14:22:51 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.591 14:22:51 -- ftl/common.sh@154 -- # return 0 00:27:48.591 14:22:51 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:48.851 [2024-12-08 14:22:51.553437] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:48.851 [2024-12-08 14:22:51.553662] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79437 ] 00:27:48.851 [2024-12-08 14:22:51.703010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.112 [2024-12-08 14:22:51.877684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.513  [2024-12-08T14:22:54.003Z] Copying: 666/1024 [MB] (666 MBps) [2024-12-08T14:22:55.388Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:27:52.468 00:27:52.468 14:22:55 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:52.468 14:22:55 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@103 -- # sum=5938fcbd59e818f8bb72219552c2bd7e 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@105 -- # [[ 5938fcbd59e818f8bb72219552c2bd7e != \5\9\3\8\f\c\b\d\5\9\e\8\1\8\f\8\b\b\7\2\2\1\9\5\5\2\c\2\b\d\7\e ]] 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:54.383 Validate MD5 checksum, iteration 2 00:27:54.383 14:22:56 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:54.383 14:22:56 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:54.383 14:22:56 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:54.383 14:22:56 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:54.383 14:22:56 -- ftl/common.sh@154 -- # return 0 00:27:54.383 14:22:56 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:54.383 [2024-12-08 14:22:57.007297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:54.383 [2024-12-08 14:22:57.007389] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79504 ] 00:27:54.383 [2024-12-08 14:22:57.151204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.644 [2024-12-08 14:22:57.322610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.030  [2024-12-08T14:22:59.521Z] Copying: 723/1024 [MB] (723 MBps) [2024-12-08T14:23:03.721Z] Copying: 1024/1024 [MB] (average 696 MBps) 00:28:00.801 00:28:00.801 14:23:03 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:00.801 14:23:03 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@103 -- # sum=8256785e9e1bfa4c53835981f9c0f5b5 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@105 -- # [[ 8256785e9e1bfa4c53835981f9c0f5b5 != \8\2\5\6\7\8\5\e\9\e\1\b\f\a\4\c\5\3\8\3\5\9\8\1\f\9\c\0\f\5\b\5 ]] 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:02.700 14:23:05 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:02.700 14:23:05 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:02.700 14:23:05 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:02.700 14:23:05 -- ftl/common.sh@130 -- # [[ -n 79397 ]] 00:28:02.700 14:23:05 -- ftl/common.sh@131 -- # killprocess 79397 00:28:02.700 14:23:05 -- common/autotest_common.sh@936 -- # '[' -z 79397 ']' 00:28:02.700 14:23:05 -- common/autotest_common.sh@940 -- # kill -0 79397 00:28:02.700 14:23:05 -- common/autotest_common.sh@941 -- # uname 00:28:02.700 14:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:02.700 14:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79397 00:28:02.700 killing process with pid 79397 00:28:02.700 14:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:02.700 14:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:02.700 14:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79397' 00:28:02.700 14:23:05 -- common/autotest_common.sh@955 -- # kill 79397 00:28:02.700 14:23:05 -- common/autotest_common.sh@960 -- # wait 79397 00:28:02.960 [2024-12-08 14:23:05.765044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:02.960 [2024-12-08 14:23:05.777283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.777316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:02.960 [2024-12-08 14:23:05.777327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:02.960 [2024-12-08 14:23:05.777333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 
[2024-12-08 14:23:05.777349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:02.960 [2024-12-08 14:23:05.779402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.779428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:02.960 [2024-12-08 14:23:05.779436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.043 ms 00:28:02.960 [2024-12-08 14:23:05.779442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.779606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.779615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:02.960 [2024-12-08 14:23:05.779622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.149 ms 00:28:02.960 [2024-12-08 14:23:05.779628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.780737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.780839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:02.960 [2024-12-08 14:23:05.780851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:28:02.960 [2024-12-08 14:23:05.780858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.781720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.781740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:02.960 [2024-12-08 14:23:05.781747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:28:02.960 [2024-12-08 14:23:05.781753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.789183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.789207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:02.960 [2024-12-08 14:23:05.789215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.394 ms 00:28:02.960 [2024-12-08 14:23:05.789221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.793282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.793309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:02.960 [2024-12-08 14:23:05.793317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.037 ms 00:28:02.960 [2024-12-08 14:23:05.793323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.793380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.793387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:02.960 [2024-12-08 14:23:05.793394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:02.960 [2024-12-08 14:23:05.793400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.800588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.800611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:02.960 [2024-12-08 14:23:05.800617] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.176 ms 00:28:02.960 [2024-12-08 14:23:05.800622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.808099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.808122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:02.960 [2024-12-08 14:23:05.808128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.454 ms 00:28:02.960 [2024-12-08 14:23:05.808133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.815148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.815245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:02.960 [2024-12-08 14:23:05.815257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.992 ms 00:28:02.960 [2024-12-08 14:23:05.815262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.822448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.822535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:02.960 [2024-12-08 14:23:05.822546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.138 ms 00:28:02.960 [2024-12-08 14:23:05.822551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.960 [2024-12-08 14:23:05.822572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:02.960 [2024-12-08 14:23:05.822582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:02.960 [2024-12-08 14:23:05.822593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:02.960 [2024-12-08 14:23:05.822599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:02.960 [2024-12-08 14:23:05.822605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822663] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:02.960 [2024-12-08 14:23:05.822696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:02.960 [2024-12-08 14:23:05.822702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d407888f-f904-40e3-9a0a-82db9f134e12 00:28:02.960 [2024-12-08 14:23:05.822707] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:02.960 [2024-12-08 14:23:05.822713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:02.960 [2024-12-08 14:23:05.822718] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:02.960 [2024-12-08 14:23:05.822724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:02.960 [2024-12-08 14:23:05.822729] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:02.960 [2024-12-08 14:23:05.822735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:02.960 [2024-12-08 14:23:05.822740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:02.960 [2024-12-08 14:23:05.822747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:02.960 [2024-12-08 14:23:05.822752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:02.960 [2024-12-08 14:23:05.822757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.960 [2024-12-08 14:23:05.822764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:02.961 [2024-12-08 14:23:05.822770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:28:02.961 [2024-12-08 14:23:05.822778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.961 [2024-12-08 14:23:05.832566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.961 [2024-12-08 14:23:05.832589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:02.961 [2024-12-08 14:23:05.832597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.776 ms 00:28:02.961 [2024-12-08 14:23:05.832604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.961 [2024-12-08 14:23:05.832750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.961 [2024-12-08 14:23:05.832756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:02.961 [2024-12-08 14:23:05.832766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:28:02.961 [2024-12-08 14:23:05.832772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.961 [2024-12-08 14:23:05.867876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:02.961 [2024-12-08 14:23:05.867901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:02.961 [2024-12-08 14:23:05.867909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:28:02.961 [2024-12-08 14:23:05.867915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.961 [2024-12-08 14:23:05.867937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:02.961 [2024-12-08 14:23:05.867944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:02.961 [2024-12-08 14:23:05.867953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:02.961 [2024-12-08 14:23:05.867959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.961 [2024-12-08 14:23:05.868014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:02.961 [2024-12-08 14:23:05.868022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:02.961 [2024-12-08 14:23:05.868028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:02.961 [2024-12-08 14:23:05.868033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.961 [2024-12-08 14:23:05.868046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:02.961 [2024-12-08 14:23:05.868052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:02.961 [2024-12-08 14:23:05.868058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:02.961 [2024-12-08 14:23:05.868066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.926571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.926602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:03.220 [2024-12-08 14:23:05.926611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.926617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:03.220 [2024-12-08 14:23:05.949322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:03.220 [2024-12-08 14:23:05.949383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:03.220 [2024-12-08 14:23:05.949431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:03.220 [2024-12-08 14:23:05.949524] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:03.220 [2024-12-08 14:23:05.949564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:03.220 [2024-12-08 14:23:05.949609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:03.220 [2024-12-08 14:23:05.949653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:03.220 [2024-12-08 14:23:05.949658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:03.220 [2024-12-08 14:23:05.949664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.220 [2024-12-08 14:23:05.949756] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 172.452 ms, result 0 00:28:03.789 14:23:06 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:03.789 14:23:06 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.789 14:23:06 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:03.789 14:23:06 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:03.789 14:23:06 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:03.789 14:23:06 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:03.789 Remove shared memory files 00:28:03.789 14:23:06 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:03.789 14:23:06 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:03.789 14:23:06 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:03.789 14:23:06 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:03.789 14:23:06 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79230 00:28:03.789 14:23:06 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:03.789 14:23:06 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:03.789 ************************************ 00:28:03.789 END TEST ftl_upgrade_shutdown 00:28:03.789 ************************************ 00:28:03.789 00:28:03.789 real 1m19.744s 00:28:03.789 user 1m52.210s 00:28:03.789 sys 0m18.648s 00:28:03.789 14:23:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:03.789 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:28:03.789 Process with pid 70558 is not found 00:28:03.789 14:23:06 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:28:03.789 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:28:03.789 14:23:06 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:28:03.789 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:28:03.789 14:23:06 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:03.789 14:23:06 -- ftl/ftl.sh@14 
00:28:03.789 14:23:06 -- ftl/ftl.sh@1 -- # at_ftl_exit
00:28:03.789 14:23:06 -- ftl/ftl.sh@14 -- # killprocess 70558
00:28:03.789 14:23:06 -- common/autotest_common.sh@936 -- # '[' -z 70558 ']'
00:28:03.789 14:23:06 -- common/autotest_common.sh@940 -- # kill -0 70558
00:28:03.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70558) - No such process
00:28:03.789 14:23:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70558 is not found'
00:28:03.789 14:23:06 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]]
00:28:03.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 14:23:06 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79638
00:28:03.789 14:23:06 -- ftl/ftl.sh@20 -- # waitforlisten 79638
00:28:03.789 14:23:06 -- common/autotest_common.sh@829 -- # '[' -z 79638 ']'
00:28:03.789 14:23:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:03.789 14:23:06 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:03.789 14:23:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:03.789 14:23:06 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:03.789 14:23:06 -- common/autotest_common.sh@10 -- # set +x
00:28:03.789 14:23:06 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:28:04.048 [2024-12-08 14:23:06.735687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:04.048 [2024-12-08 14:23:06.735819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79638 ]
00:28:04.048 [2024-12-08 14:23:06.884410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:04.306 [2024-12-08 14:23:07.019894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:04.306 [2024-12-08 14:23:07.020061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:04.873 14:23:07 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:04.873 14:23:07 -- common/autotest_common.sh@862 -- # return 0
00:28:04.873 14:23:07 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:28:04.873 nvme0n1
00:28:04.873 14:23:07 -- ftl/ftl.sh@22 -- # clear_lvols
00:28:04.873 14:23:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:04.873 14:23:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:05.131 14:23:07 -- ftl/common.sh@28 -- # stores=fb53ebd3-1608-4f07-ba4b-b017f970ae58
00:28:05.131 14:23:07 -- ftl/common.sh@29 -- # for lvs in $stores
00:28:05.131 14:23:07 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb53ebd3-1608-4f07-ba4b-b017f970ae58
00:28:05.389 14:23:08 -- ftl/ftl.sh@23 -- # killprocess 79638
00:28:05.389 14:23:08 -- common/autotest_common.sh@936 -- # '[' -z 79638 ']'
00:28:05.389 14:23:08 -- common/autotest_common.sh@940 -- # kill -0 79638
00:28:05.389 14:23:08 -- common/autotest_common.sh@941 -- # uname
00:28:05.389 14:23:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:05.389 14:23:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79638
00:28:05.389 killing process with pid 79638 14:23:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:05.389 14:23:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:05.389 14:23:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79638'
00:28:05.389 14:23:08 -- common/autotest_common.sh@955 -- # kill 79638
00:28:05.389 14:23:08 -- common/autotest_common.sh@960 -- # wait 79638
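The `killprocess` helper is traced twice above: once against the stale pid 70558 (already gone, hence the "No such process" / "is not found" path) and once against the live spdk_tgt pid 79638 (probe, comm lookup, kill, wait). A rough bash reconstruction from the traced autotest_common.sh lines; anything not visible in the log, such as the body of the sudo branch, is an assumption.

```bash
# Hedged reconstruction of killprocess from the xtrace above; the line-number
# comments refer to the autotest_common.sh lines visible in this log.
killprocess() {
    [ -z "$1" ] && return 1                                  # @936: require a pid
    if kill -0 "$1" 2>/dev/null; then                        # @940: is it alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then                      # @941
            process_name=$(ps --no-headers -o comm= "$1")    # @942
        fi
        if [ "$process_name" = sudo ]; then                  # @946
            :   # the real helper treats sudo specially; not exercised in this log
        fi
        echo "killing process with pid $1"                   # @954
        kill "$1"                                            # @955
        wait "$1"                                            # @960: reap the child
    else
        echo "Process with pid $1 is not found"              # @963
    fi
}
```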
00:28:06.767 14:23:09 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:06.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:06.767 Waiting for block devices as requested
00:28:07.026 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:28:07.027 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:28:07.027 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:07.027 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:28:12.375 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:28:12.375 Remove shared memory files 14:23:15 -- ftl/ftl.sh@28 -- # remove_shm
00:28:12.375 14:23:15 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:12.375 14:23:15 -- ftl/common.sh@205 -- # rm -f rm -f
00:28:12.375 14:23:15 -- ftl/common.sh@206 -- # rm -f rm -f
00:28:12.375 14:23:15 -- ftl/common.sh@207 -- # rm -f rm -f
00:28:12.375 14:23:15 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:12.375 14:23:15 -- ftl/common.sh@209 -- # rm -f rm -f
00:28:12.375 ************************************
00:28:12.375 END TEST ftl
00:28:12.375 ************************************
00:28:12.375
00:28:12.375 real 13m19.003s
00:28:12.375 user 15m18.802s
00:28:12.375 sys 1m42.139s
00:28:12.375 14:23:15 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:12.375 14:23:15 -- common/autotest_common.sh@10 -- # set +x
00:28:12.375 14:23:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:28:12.375 14:23:15 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:28:12.375 14:23:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:12.375 14:23:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:12.375 14:23:15 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:28:12.375 14:23:15 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:28:12.375 14:23:15 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:28:12.375 14:23:15 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:28:12.375 14:23:15 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:28:12.375 14:23:15 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:28:12.375 14:23:15 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:12.375 14:23:15 -- common/autotest_common.sh@10 -- # set +x
00:28:12.375 14:23:15 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:28:12.375 14:23:15 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:28:12.375 14:23:15 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:28:12.375 14:23:15 -- common/autotest_common.sh@10 -- # set +x
00:28:13.780 INFO: APP EXITING
00:28:13.780 INFO: killing all VMs
00:28:13.780 INFO: killing vhost app
00:28:13.780 INFO: EXIT DONE
00:28:14.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:14.354 0000:00:09.0 (1b36 0010): Already using the nvme driver
00:28:14.354 0000:00:08.0 (1b36 0010): Already using the nvme driver
00:28:14.616 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:28:14.616 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:28:15.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
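The `uio_pci_generic -> nvme` lines above record `setup.sh reset` handing the emulated NVMe controllers back from the userspace I/O driver to the kernel `nvme` driver (on the second pass they are "Already using the nvme driver"). The sketch below shows the generic sysfs rebind mechanism such a reset relies on; it is not a copy of SPDK's setup.sh, and the device address is just one of the BDFs from the log.

```bash
# Generic sysfs driver rebind, sketched under the assumption that a reset
# boils down to unbind + reprobe; SPDK's actual setup.sh does more than this.
bdf=0000:00:09.0                                            # one device from the log
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"     # detach uio_pci_generic
echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"   # steer the next probe
echo "$bdf" > /sys/bus/pci/drivers_probe                    # reprobe -> kernel nvme
echo ""     > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override
```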
00:28:15.188 Cleaning
00:28:15.188 Removing: /var/run/dpdk/spdk0/config
00:28:15.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:15.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:15.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:15.188 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:15.188 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:15.188 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:15.188 Removing: /var/run/dpdk/spdk0
00:28:15.188 Removing: /var/run/dpdk/spdk_pid55944
00:28:15.188 Removing: /var/run/dpdk/spdk_pid56140
00:28:15.188 Removing: /var/run/dpdk/spdk_pid56439
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56537
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56621
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56733
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56831
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56876
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56907
00:28:15.450 Removing: /var/run/dpdk/spdk_pid56982
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57076
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57502
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57568
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57626
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57642
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57740
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57764
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57874
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57899
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57952
00:28:15.450 Removing: /var/run/dpdk/spdk_pid57970
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58023
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58043
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58217
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58248
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58336
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58408
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58439
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58517
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58543
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58584
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58610
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58651
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58677
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58718
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58744
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58785
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58811
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58852
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58878
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58919
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58945
00:28:15.450 Removing: /var/run/dpdk/spdk_pid58986
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59020
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59062
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59089
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59130
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59156
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59199
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59225
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59265
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59287
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59332
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59354
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59395
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59416
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59457
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59483
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59524
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59550
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59591
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59620
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59664
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59699
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59743
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59769
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59810
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59847
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59889
00:28:15.450 Removing: /var/run/dpdk/spdk_pid59978
00:28:15.450 Removing: /var/run/dpdk/spdk_pid60085
00:28:15.450 Removing: /var/run/dpdk/spdk_pid60268
00:28:15.450 Removing: /var/run/dpdk/spdk_pid60354
00:28:15.450 Removing: /var/run/dpdk/spdk_pid60390
00:28:15.450 Removing: /var/run/dpdk/spdk_pid60844
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61028
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61137
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61190
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61221
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61304
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61948
00:28:15.450 Removing: /var/run/dpdk/spdk_pid61989
00:28:15.450 Removing: /var/run/dpdk/spdk_pid62464
00:28:15.450 Removing: /var/run/dpdk/spdk_pid62581
00:28:15.450 Removing: /var/run/dpdk/spdk_pid62693
00:28:15.450 Removing: /var/run/dpdk/spdk_pid62746
00:28:15.450 Removing: /var/run/dpdk/spdk_pid62771
00:28:15.450 Removing: /var/run/dpdk/spdk_pid62797
00:28:15.450 Removing: /var/run/dpdk/spdk_pid64732
00:28:15.713 Removing: /var/run/dpdk/spdk_pid64867
00:28:15.713 Removing: /var/run/dpdk/spdk_pid64882
00:28:15.713 Removing: /var/run/dpdk/spdk_pid64894
00:28:15.713 Removing: /var/run/dpdk/spdk_pid64944
00:28:15.713 Removing: /var/run/dpdk/spdk_pid64948
00:28:15.713 Removing: /var/run/dpdk/spdk_pid64961
00:28:15.713 Removing: /var/run/dpdk/spdk_pid65032
00:28:15.713 Removing: /var/run/dpdk/spdk_pid65036
00:28:15.713 Removing: /var/run/dpdk/spdk_pid65059
00:28:15.713 Removing: /var/run/dpdk/spdk_pid65103
00:28:15.713 Removing: /var/run/dpdk/spdk_pid65112
00:28:15.713 Removing: /var/run/dpdk/spdk_pid65124
00:28:15.713 Removing: /var/run/dpdk/spdk_pid66563
00:28:15.713 Removing: /var/run/dpdk/spdk_pid66672
00:28:15.713 Removing: /var/run/dpdk/spdk_pid66812
00:28:15.713 Removing: /var/run/dpdk/spdk_pid66901
00:28:15.713 Removing: /var/run/dpdk/spdk_pid66987
00:28:15.713 Removing: /var/run/dpdk/spdk_pid67064
00:28:15.713 Removing: /var/run/dpdk/spdk_pid67169
00:28:15.713 Removing: /var/run/dpdk/spdk_pid67243
00:28:15.713 Removing: /var/run/dpdk/spdk_pid67384
00:28:15.713 Removing: /var/run/dpdk/spdk_pid67773
00:28:15.713 Removing: /var/run/dpdk/spdk_pid67804
00:28:15.713 Removing: /var/run/dpdk/spdk_pid68270
00:28:15.713 Removing: /var/run/dpdk/spdk_pid68455
00:28:15.713 Removing: /var/run/dpdk/spdk_pid68556
00:28:15.713 Removing: /var/run/dpdk/spdk_pid68660
00:28:15.713 Removing: /var/run/dpdk/spdk_pid68708
00:28:15.713 Removing: /var/run/dpdk/spdk_pid68739
00:28:15.713 Removing: /var/run/dpdk/spdk_pid69049
00:28:15.713 Removing: /var/run/dpdk/spdk_pid69111
00:28:15.713 Removing: /var/run/dpdk/spdk_pid69191
00:28:15.713 Removing: /var/run/dpdk/spdk_pid69595
00:28:15.713 Removing: /var/run/dpdk/spdk_pid69748
00:28:15.713 Removing: /var/run/dpdk/spdk_pid70558
00:28:15.713 Removing: /var/run/dpdk/spdk_pid70691
00:28:15.713 Removing: /var/run/dpdk/spdk_pid70916
00:28:15.713 Removing: /var/run/dpdk/spdk_pid71019
00:28:15.713 Removing: /var/run/dpdk/spdk_pid71310
00:28:15.713 Removing: /var/run/dpdk/spdk_pid71564
00:28:15.713 Removing: /var/run/dpdk/spdk_pid71934
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72120
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72259
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72315
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72499
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72533
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72594
00:28:15.713 Removing: /var/run/dpdk/spdk_pid72823
00:28:15.713 Removing: /var/run/dpdk/spdk_pid73061
00:28:15.713 Removing: /var/run/dpdk/spdk_pid73605
00:28:15.713 Removing: /var/run/dpdk/spdk_pid74316
00:28:15.713 Removing: /var/run/dpdk/spdk_pid74960
00:28:15.713 Removing: /var/run/dpdk/spdk_pid75723
00:28:15.713 Removing: /var/run/dpdk/spdk_pid75878
00:28:15.713 Removing: /var/run/dpdk/spdk_pid75954
00:28:15.713 Removing: /var/run/dpdk/spdk_pid76522
00:28:15.713 Removing: /var/run/dpdk/spdk_pid76579
00:28:15.713 Removing: /var/run/dpdk/spdk_pid77355
00:28:15.713 Removing: /var/run/dpdk/spdk_pid77899
00:28:15.713 Removing: /var/run/dpdk/spdk_pid78667
00:28:15.713 Removing: /var/run/dpdk/spdk_pid78804
00:28:15.713 Removing: /var/run/dpdk/spdk_pid78852
00:28:15.713 Removing: /var/run/dpdk/spdk_pid78916
00:28:15.713 Removing: /var/run/dpdk/spdk_pid78973
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79039
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79230
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79269
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79337
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79397
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79437
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79504
00:28:15.713 Removing: /var/run/dpdk/spdk_pid79638
00:28:15.713 Clean
00:28:15.975 killing process with pid 48168
00:28:15.975 killing process with pid 48171
00:28:15.975 14:23:18 -- common/autotest_common.sh@1446 -- # return 0
00:28:15.975 14:23:18 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:28:15.975 14:23:18 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:15.975 14:23:18 -- common/autotest_common.sh@10 -- # set +x
00:28:15.975 14:23:18 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:28:15.975 14:23:18 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:15.975 14:23:18 -- common/autotest_common.sh@10 -- # set +x
00:28:15.975 14:23:18 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:15.975 14:23:18 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:15.975 14:23:18 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:15.975 14:23:18 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:28:15.975 14:23:18 -- spdk/autotest.sh@383 -- # hostname
00:28:15.975 14:23:18 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:16.236 geninfo: WARNING: invalid characters removed from testname!
00:28:42.869 14:23:42 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:43.130 14:23:45 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:45.679 14:23:48 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:48.228 14:23:50 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:50.779 14:23:53 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:53.331 14:23:55 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:55.260 14:23:58 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
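Together with the capture at autotest.sh@383 a little earlier, the lcov invocations above implement a standard capture → merge → prune coverage pipeline. A condensed restatement for readability, with the repeated `--rc` flags factored into a variable and the filter patterns looped; `$repo` stands in for /home/vagrant/spdk_repo/spdk, and the `--ignore-errors unused,unused` used only for the `/usr/*` pass is omitted here.

```bash
# Condensed sketch of the coverage post-processing above (not the literal script).
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output

lcov $LCOV_OPTS -q -c --no-external -d "$repo" -t "$(hostname)" \
     -o "$out/cov_test.info"                                        # capture test run
lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"                                       # merge baseline + test
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" \
         -o "$out/cov_total.info"                                   # prune external code
done
rm -f "$out/cov_base.info" "$out/cov_test.info"
```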
00:28:55.260 14:23:58 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:28:55.260 14:23:58 -- common/autotest_common.sh@1690 -- $ lcov --version
00:28:55.260 14:23:58 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:28:55.260 14:23:58 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:28:55.523 14:23:58 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:28:55.523 14:23:58 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:28:55.523 14:23:58 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:28:55.523 14:23:58 -- scripts/common.sh@335 -- $ IFS=.-:
00:28:55.523 14:23:58 -- scripts/common.sh@335 -- $ read -ra ver1
00:28:55.523 14:23:58 -- scripts/common.sh@336 -- $ IFS=.-:
00:28:55.523 14:23:58 -- scripts/common.sh@336 -- $ read -ra ver2
00:28:55.523 14:23:58 -- scripts/common.sh@337 -- $ local 'op=<'
00:28:55.523 14:23:58 -- scripts/common.sh@339 -- $ ver1_l=2
00:28:55.523 14:23:58 -- scripts/common.sh@340 -- $ ver2_l=1
00:28:55.523 14:23:58 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:28:55.523 14:23:58 -- scripts/common.sh@343 -- $ case "$op" in
00:28:55.523 14:23:58 -- scripts/common.sh@344 -- $ : 1
00:28:55.523 14:23:58 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:28:55.523 14:23:58 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:55.523 14:23:58 -- scripts/common.sh@364 -- $ decimal 1
00:28:55.523 14:23:58 -- scripts/common.sh@352 -- $ local d=1
00:28:55.523 14:23:58 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:28:55.523 14:23:58 -- scripts/common.sh@354 -- $ echo 1
00:28:55.523 14:23:58 -- scripts/common.sh@364 -- $ ver1[v]=1
00:28:55.523 14:23:58 -- scripts/common.sh@365 -- $ decimal 2
00:28:55.523 14:23:58 -- scripts/common.sh@352 -- $ local d=2
00:28:55.523 14:23:58 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:28:55.523 14:23:58 -- scripts/common.sh@354 -- $ echo 2
00:28:55.523 14:23:58 -- scripts/common.sh@365 -- $ ver2[v]=2
00:28:55.523 14:23:58 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:28:55.523 14:23:58 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:28:55.523 14:23:58 -- scripts/common.sh@367 -- $ return 0
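The trace above walks `cmp_versions 1.15 '<' 2` from scripts/common.sh: both version strings are split on `.`, `-`, and `:`, each component is normalized by `decimal`, and components are compared pairwise until one side wins (here 1 < 2 in the first position, so the function returns 0 and the lcov version check passes). A functional sketch of the pair; the real script dispatches on `$op` via a `case` before the loop (visible at @343/@344), so the control flow here is rearranged for compactness and should be read as an illustration, not a copy.

```bash
# Sketch of the decimal/cmp_versions pair traced above.
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # missing/non-numeric parts become 0
}

cmp_versions() {   # cmp_versions 1.15 '<' 2  ->  returns 0 iff 1.15 < 2
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"                # split on '.', '-', ':'
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    local lt=0 gt=0 eq=0 v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]}")
        ver2[v]=$(decimal "${ver2[v]}")
        (( ver1[v] > ver2[v] )) && { gt=1; break; }
        (( ver1[v] < ver2[v] )) && { lt=1; break; }
    done
    case "$op" in
        '<') (( lt == 1 )) ;;
        '>') (( gt == 1 )) ;;
        *)   return 1 ;;
    esac
}
```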
00:28:55.523 14:23:58 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:55.523 14:23:58 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:28:55.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.523 --rc genhtml_branch_coverage=1
00:28:55.523 --rc genhtml_function_coverage=1
00:28:55.523 --rc genhtml_legend=1
00:28:55.523 --rc geninfo_all_blocks=1
00:28:55.523 --rc geninfo_unexecuted_blocks=1
00:28:55.523
00:28:55.523 '
00:28:55.523 14:23:58 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:28:55.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.523 --rc genhtml_branch_coverage=1
00:28:55.523 --rc genhtml_function_coverage=1
00:28:55.523 --rc genhtml_legend=1
00:28:55.523 --rc geninfo_all_blocks=1
00:28:55.523 --rc geninfo_unexecuted_blocks=1
00:28:55.523
00:28:55.523 '
00:28:55.523 14:23:58 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:28:55.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.523 --rc genhtml_branch_coverage=1
00:28:55.523 --rc genhtml_function_coverage=1
00:28:55.523 --rc genhtml_legend=1
00:28:55.523 --rc geninfo_all_blocks=1
00:28:55.523 --rc geninfo_unexecuted_blocks=1
00:28:55.523
00:28:55.523 '
00:28:55.523 14:23:58 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:28:55.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.523 --rc genhtml_branch_coverage=1
00:28:55.523 --rc genhtml_function_coverage=1
00:28:55.523 --rc genhtml_legend=1
00:28:55.523 --rc geninfo_all_blocks=1
00:28:55.523 --rc geninfo_unexecuted_blocks=1
00:28:55.523
00:28:55.523 '
00:28:55.523 14:23:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:55.523 14:23:58 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:55.523 14:23:58 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:55.523 14:23:58 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:55.523 14:23:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.523 14:23:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.523 14:23:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.523 14:23:58 -- paths/export.sh@5 -- $ export PATH
00:28:55.524 14:23:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.524 14:23:58 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:55.524 14:23:58 -- common/autobuild_common.sh@440 -- $ date +%s
00:28:55.524 14:23:58 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733667838.XXXXXX
00:28:55.524 14:23:58 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733667838.FZu6By
00:28:55.524 14:23:58 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:28:55.524 14:23:58 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:28:55.524 14:23:58 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:55.524 14:23:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:55.524 14:23:58 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:55.524 14:23:58 -- common/autobuild_common.sh@456 -- $ get_config_params
00:28:55.524 14:23:58 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:28:55.524 14:23:58 -- common/autotest_common.sh@10 -- $ set +x
00:28:55.524 14:23:58 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:55.524 14:23:58 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:28:55.524 14:23:58 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:28:55.524 14:23:58 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:55.524 14:23:58 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:28:55.524 14:23:58 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:55.524 14:23:58 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:55.524 14:23:58 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:55.524 14:23:58 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:55.524 14:23:58 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:55.524 14:23:58 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:55.535 + [[ -n 4982 ]]
00:28:55.535 + sudo kill 4982
00:28:55.545 [Pipeline] }
00:28:55.552 [Pipeline] // timeout
00:28:55.559 [Pipeline] }
00:28:55.576 [Pipeline] // stage
00:28:55.582 [Pipeline] }
00:28:55.598 [Pipeline] // catchError
00:28:55.609 [Pipeline] stage
00:28:55.612 [Pipeline] { (Stop VM)
00:28:55.624 [Pipeline] sh
00:28:55.911 + vagrant halt
00:28:58.454 ==> default: Halting domain...
00:29:05.055 [Pipeline] sh
00:29:05.340 + vagrant destroy -f
00:29:07.917 ==> default: Removing domain...
00:29:08.504 [Pipeline] sh
00:29:08.789 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:08.799 [Pipeline] }
00:29:08.833 [Pipeline] // stage
00:29:08.839 [Pipeline] }
00:29:08.853 [Pipeline] // dir
00:29:08.858 [Pipeline] }
00:29:08.872 [Pipeline] // wrap
00:29:08.878 [Pipeline] }
00:29:08.891 [Pipeline] // catchError
00:29:08.901 [Pipeline] stage
00:29:08.904 [Pipeline] { (Epilogue)
00:29:08.917 [Pipeline] sh
00:29:09.204 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:13.411 [Pipeline] catchError
00:29:13.413 [Pipeline] {
00:29:13.425 [Pipeline] sh
00:29:13.709 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:13.709 Artifacts sizes are good
00:29:13.720 [Pipeline] }
00:29:13.734 [Pipeline] // catchError
00:29:13.750 [Pipeline] archiveArtifacts
00:29:13.764 Archiving artifacts
00:29:13.847 [Pipeline] cleanWs
00:29:13.854 [WS-CLEANUP] Deleting project workspace...
00:29:13.854 [WS-CLEANUP] Deferred wipeout is used...
00:29:13.859 [WS-CLEANUP] done
00:29:13.860 [Pipeline] }
00:29:13.872 [Pipeline] // stage
00:29:13.877 [Pipeline] }
00:29:13.887 [Pipeline] // node
00:29:13.890 [Pipeline] End of Pipeline
00:29:13.930 Finished: SUCCESS